WorldWideScience

Sample records for micap codes function

  1. A user's guide to MICAP: A Monte Carlo Ionization Chamber Analysis Package

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J.O.; Gabriel, T.A.

    1988-01-01

    A collection of computer codes entitled MICAP - A Monte Carlo Ionization Chamber Analysis Package has been developed to determine the response of a gas-filled cavity ionization chamber in a mixed neutron and photon radiation environment. In particular, MICAP determines the neutron, photon, and total response of the ionization chamber. The applicability of MICAP encompasses all aspects of mixed field dosimetry analysis including detector design, preexperimental planning and post-experimental analysis. The MICAP codes include: RDNDF for reading and processing ENDF/B-formatted cross section files, MICRO for manipulating microscopic cross section data sets, MACRO for creating macroscopic cross section data sets, NEUTRON for transporting neutrons, RECOMB for calculating correction data due to ionization chamber saturation effects, HEAVY for transporting recoil heavy ions and charged particles, PECSP for generating photon and electron cross section and material data sets, PHOTPREP for generating photon source input tapes, and PHOTON for transporting photons and electrons. The codes are generally tailored to provide numerous input options, but whenever possible, default values are supplied which yield adequate results. All of the MICAP codes function independently, and are operational on the ORNL IBM 3033 computer system. 14 refs., 27 figs., 49 tabs.

  2. MICAP, Ionization Chamber Detector Response by Monte-Carlo

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: MICAP has been developed to determine the response of a gas-filled cavity ionization chamber or other detector type (plastic scintillator, calorimeter) in a mixed neutron and photon radiation environment. In particular, MICAP determines the neutron, photon, and total response of the detector system. The applicability of MICAP encompasses all aspects of mixed field dosimetry analysis including detector design, pre-experimental planning and post-experimental analysis. MICAP is a modular code system developed to be general with respect to problem applicability. The transport modules utilize combinatorial geometry to accurately model the source/detector geometry and also use continuous energy and angle cross section and material data to represent the materials for a particular problem. 2 - Method of solution: The calculational scheme used in MICAP follows individual radiation particles incident on the detector wall material. The incident neutrons produce photons and heavy charged particles, and both primary and secondary photons produce electrons and positrons. As these charged particles enter or are produced in the detector material, they lose energy and produce ion pairs until their energy is completely dissipated or until they escape the detector. Ion recombination effects are included along the path of each charged particle rather than applied as an integral correction to the final result. The neutron response is determined from the energy deposition resulting from the transport of the charged particles and recoil heavy ions produced via the neutron interactions with the detector materials. The photon response is determined from the transport of both the primary photon radiation incident on the detector and also the secondary photons produced via the neutron interactions. MICAP not only yields the energy deposition by particle type and total energy deposited, but also the particular type of reaction, i.e. elastic scattering

  3. Comparison of the Retrieval of Sea Surface Salinity Using Different Instrument Configurations of MICAP

    Directory of Open Access Journals (Sweden)

    Lanjie Zhang

    2018-04-01

    Full Text Available The Microwave Imager Combined Active/Passive (MICAP) has been designed to simultaneously retrieve sea surface salinity (SSS), sea surface temperature (SST) and wind speed (WS), and its performance has also been preliminarily analyzed. To determine the influence of first-guess value uncertainties on the retrieved parameters of MICAP, the retrieval accuracies of SSS, SST, and WS are estimated at various noise levels. The results suggest that the errors on the retrieved SSS do not increase due to poorly known initial values of SST and WS, since MICAP can simultaneously acquire SST information and correct for ocean surface roughness. The main objective of this paper is to obtain a simplified instrument configuration of MICAP without loss of SSS, SST, and WS retrieval accuracy. Comparisons are conducted between three different instrument configurations in retrieval mode, based on the simulated measurements of MICAP. The retrieval results tend to prove that, without the 23.8 GHz channel, the errors on the retrieved SSS, SST, and WS for MICAP could still satisfy the accuracy requirements well globally during only one satellite pass. By contrast, without the 1.26 GHz scatterometer, there are relatively large increases in the SSS, SST, and WS errors at middle/low latitudes.

  4. Simulations of neutron transport at low energy: a comparison between GEANT and MCNP.

    Science.gov (United States)

    Colonna, N; Altieri, S

    2002-06-01

    The use of the simulation tool GEANT for neutron transport at energies below 20 MeV is discussed, in particular with regard to shielding and dose calculations. The reliability of the GEANT/MICAP package for neutron transport in a wide energy range has been verified by comparing the results of simulations performed with this package with the predictions of MCNP-4B, a code commonly used for neutron transport at low energy. A reasonable agreement between the results of the two codes is found for the neutron flux through a slab of material (iron and ordinary concrete), as well as for the dose released in soft tissue by neutrons. These results justify the use of the GEANT/MICAP code for neutron transport in a wide range of applications, including health physics problems.

  5. Monte-Carlo simulations of neutron shielding for the ATLAS forward region

    CERN Document Server

    Stekl, I; Kovalenko, V E; Vorobel, V; Leroy, C; Piquemal, F; Eschbach, R; Marquet, C

    2000-01-01

    The effectiveness of different types of neutron shielding for the ATLAS forward region has been studied by means of Monte-Carlo simulations and compared with the results of an experiment performed at the CERN PS. The simulation code is based on GEANT, FLUKA, MICAP and GAMLIB. GAMLIB is a new library including processes with gamma-rays produced in (n, gamma) and (n, n'gamma) neutron reactions and is interfaced to the MICAP code. The effectiveness of different types of shielding against neutrons and gamma-rays, composed of different materials such as pure polyethylene, borated polyethylene, lithium-filled polyethylene, lead and iron, was compared. The results from Monte-Carlo simulations were compared to the results obtained from the experiment. The simulation results reproduce the experimental data well. This agreement supports the correctness of the simulation code used to describe the generation, spreading and absorption of neutrons (up to thermal energies) and gamma-rays in the shielding materials....

  6. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...

  7. Order functions and evaluation codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pellikaan, Ruud; van Lint, Jack

    1997-01-01

    Based on the notion of an order function we construct and determine the parameters of a class of error-correcting evaluation codes. This class includes the one-point algebraic geometry codes as well as the generalized Reed-Muller codes, and the parameters are determined without using the heavy machinery of algebraic geometry....

  8. ARES: automated response function code. Users manual

    International Nuclear Information System (INIS)

    Maung, T.; Reynolds, G.M.

    1981-06-01

    This ARES user's manual provides detailed instructions for a general understanding of the Automated Response Function Code and gives step by step instructions for using the complete code package on a HP-1000 system. This code is designed to calculate response functions of NaI gamma-ray detectors, with cylindrical or rectangular geometries.

  9. Esophageal function testing: Billing and coding update.

    Science.gov (United States)

    Khan, A; Massey, B; Rao, S; Pandolfino, J

    2018-01-01

    Esophageal function testing is being increasingly utilized in diagnosis and management of esophageal disorders. There have been several recent technological advances in the field to allow practitioners the ability to more accurately assess and treat such conditions, but there has been a relative lack of education in the literature regarding the associated Current Procedural Terminology (CPT) codes and methods of reimbursement. This review, commissioned and supported by the American Neurogastroenterology and Motility Society Council, aims to summarize each of the CPT codes for esophageal function testing and show the trends of associated reimbursement, as well as recommend coding methods in a practical context. We also aim to encourage many of these codes to be reviewed on a gastrointestinal (GI) societal level, by providing evidence of both discrepancies in coding definitions and inadequate reimbursement in this new era of esophageal function testing. © 2017 John Wiley & Sons Ltd.

  10. Simulation of Resistive Plate Chamber sensitivity to neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Altieri, S. E-mail: saverio.altieri@pv.infn.it; Belli, G.; Bruno, G.; Merlo, M.; Ratti, S.P.; Riccardi, C.; Torre, P.; Vitulo, P.; Abbrescia, M.; Colaleo, A.; Iaselli, G.; Loddo, F.; Maggi, M.; Marangelli, B.; Natali, S.; Nuzzo, S.; Pugliese, G.; Ranieri, A.; Romano, F

    2001-04-01

    The sensitivity of Resistive Plate Chambers (RPCs) to neutrons has been simulated using the GEANT code with MICAP and FLUKA interfaces. The calculations have been performed as a function of the neutron energy in the range 0.02 eV-1 GeV. To evaluate the response of the detector in the LHC background environment, the neutron energy spectrum expected in the CMS muon barrel has been taken into account; a hit rate due to neutrons of about 0.6 Hz cm⁻² has been estimated for a 250×250 cm² RPC in the RB1 station.

  11. Comparison of the learning of two notations: A pilot study.

    Science.gov (United States)

    Akram, Ashfaq; Fuadfuad, Maher D; Malik, Arshad Mahmood; Nasir Alzurfi, Balsam Mahdi; Changmai, Manah Chandra; Madlena, Melinda

    2017-04-01

    MICAP is a new notation in which the teeth are indicated by letters (I-incisor, C-canine, P-premolar, M-molar) and numbers [1,2,3] which are written superscript and subscript on the relevant letters. FDI tooth notation is a two digit system where one digit shows quadrant and the second one shows the tooth of the quadrant. This study aimed to compare the short term retention of knowledge of two notation systems (FDI two digit system and MICAP notation) by lecture method. Undergraduate students [N=80] of three schools participated in a cross-over study. Two theory-driven classroom based lectures on MICAP notation and FDI notation were delivered separately. Data were collected using eight randomly selected permanent teeth to be written in MICAP format and FDI format at pretest (before the lecture), post-test I (immediately after lecture) and post-test II (one week after the lecture). Analysis was done by SPSS version 20.0 using repeated measures ANCOVA and independent t-test. The results of pre-test and post-test I were similar for FDI education. Similar results were found between post-test I and post-test II for MICAP and FDI notations. The study findings indicated that the two notations (FDI and MICAP) were equally mind cognitive. However, the sample size used in this study may not reflect the global scenario. Therefore, we suggest more studies to be performed for prospective adaptation of MICAP in dental curriculum.

  12. Binary codes with impulse autocorrelation functions for dynamic experiments

    International Nuclear Information System (INIS)

    Corran, E.R.; Cummins, J.D.

    1962-09-01

    A series of binary codes exist which have autocorrelation functions approximating to an impulse function. Signals whose behaviour in time can be expressed by such codes have spectra which are 'whiter' over a limited bandwidth and for a finite time than signals from a white noise generator. These codes are used to determine system dynamic responses using the correlation technique. Programmes have been written to compute codes of arbitrary length and to compute 'cyclic' autocorrelation and cross-correlation functions. Complete listings of these programmes are given, and a code of 1019 bits is presented. (author)
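
    To illustrate the central idea of the report — binary sequences whose cyclic autocorrelation approximates an impulse — the Python sketch below generates a maximal-length (m-)sequence with a linear-feedback shift register and computes its cyclic autocorrelation, whose off-peak values are all -1/N. This is only an illustrative stand-in: it is not the original programmes, and it uses an assumed primitive feedback polynomial rather than the 1019-bit code presented in the report.

      import numpy as np

      def lfsr_m_sequence(taps=(7, 1), seed=1):
          # Fibonacci LFSR with taps (7, 1); the corresponding feedback polynomial is
          # primitive, so the sequence has maximal period 2**7 - 1 = 127 bits.
          n = max(taps)
          state = [(seed >> i) & 1 for i in range(n)]
          out = []
          for _ in range(2 ** n - 1):
              out.append(state[-1])
              feedback = 0
              for t in taps:
                  feedback ^= state[t - 1]
              state = [feedback] + state[:-1]
          return np.array(out)

      def cyclic_autocorrelation(bits):
          # Map bits to +/-1 and correlate against every cyclic shift.
          x = 2.0 * bits - 1.0
          return np.array([np.dot(x, np.roll(x, k)) for k in range(len(x))]) / len(x)

      acf = cyclic_autocorrelation(lfsr_m_sequence())
      print(acf[0], acf[1:].min(), acf[1:].max())   # 1.0 at zero lag, -1/127 at every other lag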

  13. Comparison of the learning of two notations: A pilot study

    Directory of Open Access Journals (Sweden)

    ASHFAQ AKRAM

    2017-05-01

    Full Text Available Introduction: MICAP is a new notation in which the teeth are indicated by letters (I-incisor, C-canine, P-premolar, M-molar) and numbers [1,2,3] which are written superscript and subscript on the relevant letters. FDI tooth notation is a two digit system where one digit shows the quadrant and the second one shows the tooth of the quadrant. This study aimed to compare the short term retention of knowledge of two notation systems (FDI two digit system and MICAP notation) by lecture method. Methods: Undergraduate students [N=80] of three schools participated in a cross-over study. Two theory-driven classroom based lectures on MICAP notation and FDI notation were delivered separately. Data were collected using eight randomly selected permanent teeth to be written in MICAP format and FDI format at pretest (before the lecture), post-test I (immediately after the lecture) and post-test II (one week after the lecture). Analysis was done by SPSS version 20.0 using repeated measures ANCOVA and independent t-test. Results: The results of pre-test and post-test I were similar for FDI education. Similar results were found between post-test I and post-test II for MICAP and FDI notations. Conclusion: The study findings indicated that the two notations (FDI and MICAP) were equally mind cognitive. However, the sample size used in this study may not reflect the global scenario. Therefore, we suggest more studies to be performed for prospective adaptation of MICAP in dental curriculum.

  14. Function and Application Areas in Medicine of Non-Coding RNA

    Directory of Open Access Journals (Sweden)

    Figen Guzelgul

    2009-06-01

    Full Text Available RNA is the genetic material converting the genetic code that it gets from DNA into protein. While less than 2% of RNA is converted into protein, more than 98% of it cannot be converted into protein and is termed non-coding RNA. 70% of non-coding RNAs consist of introns, while the rest consist of exons. Non-coding RNAs are examined in two classes according to their size and functions. Whereas they are classified as long non-coding and small non-coding RNAs according to their size, they are grouped as housekeeping non-coding RNAs and regulatory non-coding RNAs according to their function. For many years, these non-coding RNAs were considered non-functional. Today, however, it has been proved that these non-coding RNAs play a role in regulating genes and in the structural, functional and catalytic roles of RNAs converted into protein. Owing to its role in the gene silencing mechanism, non-coding RNA has led to significant developments, particularly in the medical world. RNAi technology, which is used in designing drugs for the treatment of various diseases, is a ray of hope for the medical world. [Archives Medical Review Journal 2009; 18(3): 141-155]

  15. ARES: automated response function code. Users manual. [HPGAM and LSQVM

    Energy Technology Data Exchange (ETDEWEB)

    Maung, T.; Reynolds, G.M.

    1981-06-01

    This ARES user's manual provides detailed instructions for a general understanding of the Automated Response Function Code and gives step by step instructions for using the complete code package on a HP-1000 system. This code is designed to calculate response functions of NaI gamma-ray detectors, with cylindrical or rectangular geometries.

  16. Lee weight enumerators of self-dual codes and theta functions

    NARCIS (Netherlands)

    Asch, van A.G.; Martens, F.J.L.

    2008-01-01

    The theory of modular forms, in particular theta functions, and coding theory are in a remarkable way connected. The connection is established by defining a suitable lattice corresponding to the given code, and considering its theta function. First we define some special theta functions, and

  17. Development and in vitro characterization of 5-flurouracilloaded ...

    African Journals Online (AJOL)

    Purpose: To prepare chondroitin sulphate–polyvinyl alcohol cross-linked microcapsules (miCAPs) for controlled delivery of 5-flurouracil (5-FU) in cancer patients. Method: Nine different miCAP formulations were prepared using emulsion cross-linking procedure. The formulations were evaluated for their physicochemical ...

  18. Arbitrariness is not enough: towards a functional approach to the genetic code.

    Science.gov (United States)

    Lacková, Ľudmila; Matlach, Vladimír; Faltýnek, Dan

    2017-12-01

    Arbitrariness in the genetic code is one of the main reasons for a linguistic approach to molecular biology: the genetic code is usually understood as an arbitrary relation between amino acids and nucleobases. However, from a semiotic point of view, arbitrariness should not be the only condition for the definition of a code; consequently, it is not completely correct to talk about a "code" in this case. Yet we suppose that there exists a code in the process of protein synthesis, but on a higher level than the chains of nucleic bases. Semiotically, a code should always be associated with a function, and we propose to define the genetic code not only relationally (on the basis of the relation between nucleobases and amino acids) but also in terms of function (the function of a protein as the meaning of the code). Even if the functional definition of meaning in the genetic code has been discussed in the field of biosemiotics, its further implications have not been considered. In fact, if the function of a protein represents the meaning of the genetic code (the sign's object), then it is crucial to reconsider the notion of its expression (the sign) as well. In our contribution, we will show that the actual model of the genetic code is not the only one possible, and we will propose a more appropriate model from a semiotic point of view.

  19. Functional interrogation of non-coding DNA through CRISPR genome editing.

    Science.gov (United States)

    Canver, Matthew C; Bauer, Daniel E; Orkin, Stuart H

    2017-05-15

    Methodologies to interrogate non-coding regions have lagged behind coding regions despite comprising the vast majority of the genome. However, the rapid evolution of clustered regularly interspaced short palindromic repeats (CRISPR)-based genome editing has provided a multitude of novel techniques for laboratory investigation including significant contributions to the toolbox for studying non-coding DNA. CRISPR-mediated loss-of-function strategies rely on direct disruption of the underlying sequence or repression of transcription without modifying the targeted DNA sequence. CRISPR-mediated gain-of-function approaches similarly benefit from methods to alter the targeted sequence through integration of customized sequence into the genome as well as methods to activate transcription. Here we review CRISPR-based loss- and gain-of-function techniques for the interrogation of non-coding DNA. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Authentication codes from ε-ASU hash functions with partially secret keys

    NARCIS (Netherlands)

    Liu, S.L.; Tilborg, van H.C.A.; Weng, J.; Chen, Kefei

    2014-01-01

    An authentication code can be constructed with a family of e-Almost strong universal (e-ASU) hash functions, with the index of hash functions as the authentication key. This paper considers the performance of authentication codes from e-ASU, when the authentication key is only partially secret. We

  1. Evolutionary analysis reveals regulatory and functional landscape of coding and non-coding RNA editing.

    Science.gov (United States)

    Zhang, Rui; Deng, Patricia; Jacobson, Dionna; Li, Jin Billy

    2017-02-01

    Adenosine-to-inosine RNA editing diversifies the transcriptome and promotes functional diversity, particularly in the brain. A plethora of editing sites has been recently identified; however, how they are selected and regulated and which are functionally important are largely unknown. Here we show the cis-regulation and stepwise selection of RNA editing during Drosophila evolution and pinpoint a large number of functional editing sites. We found that the establishment of editing and variation in editing levels across Drosophila species are largely explained and predicted by cis-regulatory elements. Furthermore, editing events that arose early in the species tree tend to be more highly edited in clusters and enriched in slowly-evolved neuronal genes, thus suggesting that the main role of RNA editing is for fine-tuning neurological functions. While nonsynonymous editing events have been long recognized as playing a functional role, in addition to nonsynonymous editing sites, a large fraction of 3'UTR editing sites is evolutionarily constrained, highly edited, and thus likely functional. We find that these 3'UTR editing events can alter mRNA stability and affect miRNA binding and thus highlight the functional roles of noncoding RNA editing. Our work, through evolutionary analyses of RNA editing in Drosophila, uncovers novel insights of RNA editing regulation as well as its functions in both coding and non-coding regions.

  2. Performance of the dot product function in radiative transfer code SORD

    Science.gov (United States)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-10-01

    The successive orders of scattering radiative transfer (RT) codes frequently call the scalar (dot) product function. In this paper, we study the performance of several implementations of the dot product in the RT code SORD, using 50 scenarios for light scattering in the atmosphere-surface system. In the dot product function, we use the unrolled-loops technique with different unrolling factors. We also consider the intrinsic Fortran functions. We show results for two machines: the ifort compiler under Windows, and pgf90 under Linux. The intrinsic DOT_PRODUCT function showed the best performance for ifort. For pgf90, the dot product implemented with unrolling factor 4 was the fastest. The RT code SORD, together with the interface that runs all the mentioned tests, is publicly available from ftp://maiac.gsfc.nasa.gov/pub/skorkin/SORD_IP_16B (current release) or by email request from the corresponding (first) author.
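
    For readers unfamiliar with the technique, the sketch below illustrates loop unrolling with factor 4 for a dot product. It is a conceptual Python rendering only: SORD itself is Fortran, the reported speed-ups depend on the ifort and pgf90 compilers, so Python timings would not be representative, and the function names here are purely illustrative.

      import numpy as np

      def dot_plain(a, b):
          s = 0.0
          for i in range(len(a)):
              s += a[i] * b[i]
          return s

      def dot_unroll4(a, b):
          # Unrolling factor 4: four partial sums reduce loop overhead and expose
          # instruction-level parallelism (the effect measured in the Fortran code).
          n = len(a)
          s0 = s1 = s2 = s3 = 0.0
          i = 0
          while i + 3 < n:
              s0 += a[i] * b[i]
              s1 += a[i + 1] * b[i + 1]
              s2 += a[i + 2] * b[i + 2]
              s3 += a[i + 3] * b[i + 3]
              i += 4
          while i < n:           # remainder loop
              s0 += a[i] * b[i]
              i += 1
          return s0 + s1 + s2 + s3

      a, b = np.random.rand(1000), np.random.rand(1000)
      assert np.isclose(dot_plain(a, b), np.dot(a, b))
      assert np.isclose(dot_unroll4(a, b), np.dot(a, b))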

  3. Enforcing the use of API functions in Linux code

    DEFF Research Database (Denmark)

    Lawall, Julia; Muller, Gilles; Palix, Nicolas Jean-Michel

    2009-01-01

    In the Linux kernel source tree, header files typically define many small functions that have a simple behavior but are critical to ensure readability, correctness, and maintainability. We have observed, however, that some Linux code does not use these functions systematically. In this paper, we...... in the header file include/linux/usb.h....

  4. Vectorial Resilient PC(l) of Order k Boolean Functions from AG-Codes

    Institute of Scientific and Technical Information of China (English)

    Hao CHEN; Liang MA; Jianhua LI

    2011-01-01

    Propagation criteria and resiliency of vectorial Boolean functions are important for cryptographic purposes (see [1-4, 7, 8, 10, 11, 16]). Kurosawa and Satoh [8] and Carlet [1] gave a construction of Boolean functions satisfying PC(l) of order k from binary linear or nonlinear codes. In this paper, the algebraic-geometric codes over GF(2^m) are used to modify the Carlet and Kurosawa-Satoh construction for giving vectorial resilient Boolean functions satisfying the PC(l) of order k criterion. This new construction is compared with previously known results.

  5. Simplifying the parallelization of scientific codes by a function-centric approach in Python

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Cai Xing; Langtangen, Hans Petter; Hoeyland, Bjoern

    2010-01-01

    The purpose of this paper is to show how existing scientific software can be parallelized using a separate thin layer of Python code where all parallelization-specific tasks are implemented. We provide specific examples of such a Python code layer, which can act as templates for parallelizing a wide set of serial scientific codes. The use of Python for parallelization is motivated by the fact that the language is well suited for reusing existing serial codes programmed in other languages. The extreme flexibility of Python with regard to handling functions makes it very easy to wrap up decomposed computational tasks of a serial scientific application as Python functions. Many parallelization-specific components can be implemented as generic Python functions, which may take as input those wrapped functions that perform concrete computational tasks. The overall programming effort needed by this parallelization approach is limited, and the resulting parallel Python scripts have a compact and clean structure. The usefulness of the parallelization approach is exemplified by three different classes of application in natural and social sciences.
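
    A minimal sketch of the function-centric idea described above, assuming a hypothetical serial kernel solve_subdomain: the serial work is wrapped as an ordinary Python function and a small, generic parallelization layer distributes the wrapped tasks with multiprocessing. The paper's actual layer (and its wrapping of routines written in other languages) is not reproduced here.

      from multiprocessing import Pool

      # Hypothetical serial kernel: in the paper's setting this would wrap a routine
      # from the existing serial code, possibly implemented in another language.
      def solve_subdomain(task):
          subdomain_id, params = task
          return subdomain_id, sum(p * p for p in params)   # placeholder computation

      def parallel_map(func, tasks, nprocs=4):
          # Generic parallelization-specific component: distribute wrapped
          # computational tasks over worker processes and gather the results.
          with Pool(processes=nprocs) as pool:
              return pool.map(func, tasks)

      if __name__ == "__main__":
          tasks = [(i, [float(i), float(i) + 1.0]) for i in range(8)]
          results = parallel_map(solve_subdomain, tasks)
          print(dict(results))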

  6. Examination of wall functions for a Parabolized Navier-Stokes code for supersonic flow

    Energy Technology Data Exchange (ETDEWEB)

    Alsbrooks, T.H. [New Mexico Univ., Albuquerque, NM (United States). Dept. of Mechanical Engineering

    1993-04-01

    Solutions from a Parabolized Navier-Stokes (PNS) code with an algebraic turbulence model are compared with wall functions. The wall functions represent the turbulent flow profiles in the viscous sublayer, thus removing many grid points from the solution procedure. The wall functions are intended to replace the computed profiles between the body surface and a match point in the logarithmic region. A supersonic adiabatic flow case was examined first. This adiabatic case indicates close agreement between computed velocity profiles near the wall and the wall function for a limited range of suitable match points in the logarithmic region. In an attempt to improve marching stability, a laminar to turbulent transition routine was implemented at the start of the PNS code. Implementing the wall function with the transitional routine in the PNS code is expected to reduce computational time while maintaining good accuracy in computed skin friction.

  7. Examination of wall functions for a Parabolized Navier-Stokes code for supersonic flow

    Energy Technology Data Exchange (ETDEWEB)

    Alsbrooks, T.H. (New Mexico Univ., Albuquerque, NM (United States). Dept. of Mechanical Engineering)

    1993-01-01

    Solutions from a Parabolized Navier-Stokes (PNS) code with an algebraic turbulence model are compared with wall functions. The wall functions represent the turbulent flow profiles in the viscous sublayer, thus removing many grid points from the solution procedure. The wall functions are intended to replace the computed profiles between the body surface and a match point in the logarithmic region. A supersonic adiabatic flow case was examined first. This adiabatic case indicates close agreement between computed velocity profiles near the wall and the wall function for a limited range of suitable match points in the logarithmic region. In an attempt to improve marching stability, a laminar to turbulent transition routine was implemented at the start of the PNS code. Implementing the wall function with the transitional routine in the PNS code is expected to reduce computational time while maintaining good accuracy in computed skin friction.
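
    As a rough illustration of what a wall function replaces, the sketch below evaluates a simple two-layer incompressible wall function: the linear viscous-sublayer profile u+ = y+ blended into the logarithmic law of the wall. The constants and the switch point are typical textbook values; the compressibility corrections a supersonic PNS calculation would require are not shown.

      import numpy as np

      KAPPA, B = 0.41, 5.0   # von Karman constant and log-law intercept (typical values)

      def u_plus(y_plus):
          # Viscous sublayer (u+ = y+) for small y+, log law u+ = ln(y+)/kappa + B beyond
          # an approximate match point near y+ ~ 11 where the two profiles intersect.
          y_plus = np.asarray(y_plus, dtype=float)
          log_law = np.log(np.maximum(y_plus, 1e-12)) / KAPPA + B
          return np.where(y_plus < 11.0, y_plus, log_law)

      print(u_plus([5.0, 30.0, 100.0]))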

  8. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    International Nuclear Information System (INIS)

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first one is that the computer code inputs are functional and scalar variables, functional ones being dependent. The second feature is that the probability distribution of functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology enables to both model the dependency between variables and their link to another variable, called co-variate, which could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables to simultaneously visualize the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or meta model, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the meta model. Finally, a new approximation approach for expensive codes with functional outputs has been

  9. Code REX to fit experimental data to exponential functions and graphics plotting

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    The REX code, written in Fortran IV, performs the fitting of a set of experimental data to different kinds of functions: a straight line (Y = A + BX) and various exponential types (Y = A·B^X, Y = A·X^B, Y = A·exp(BX)), using the least-squares criterion. The fitting can be done directly for one selected function or for the four simultaneously, and allows choosing the function that best fits the data, since the statistics of all the fits are presented. Furthermore, the code plots the fitted function in the appropriate coordinate axis system. An additional option also allows graphic plotting of the experimental data used for the fitting. All the data necessary to execute this code are requested from the operator at the terminal screen in an interactive screen-operator dialogue, and the values are entered through the keyboard. This code can be executed on any computer provided with a graphics screen and keyboard terminal, with an X-Y plotter serially connected to the graphics terminal. (Author) 5 refs
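
    As an illustration of the kind of fit REX performs, the Python sketch below (not the Fortran IV code itself) fits the exponential form Y = A·exp(BX) by the usual linearization ln Y = ln A + BX followed by ordinary least squares; the other function types are handled analogously.

      import numpy as np

      def fit_exponential(x, y):
          # Least-squares fit of y = A*exp(B*x) via the linearization ln y = ln A + B*x
          # (assumes y > 0, as required by the logarithm).
          B, ln_A = np.polyfit(x, np.log(y), 1)
          return np.exp(ln_A), B

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 5.0, 20)
      y = 2.5 * np.exp(-0.8 * x) * (1.0 + 0.02 * rng.standard_normal(x.size))
      A, B = fit_exponential(x, y)
      print(f"A = {A:.3f}, B = {B:.3f}")   # roughly A = 2.5, B = -0.8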

  10. Programming a real code in a functional language (part 1)

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, C.P.

    1991-09-10

    For some, functional languages hold the promise of allowing ease of programming massively parallel computers that imperative languages such as Fortran and C do not offer. At LLNL, we have initiated a project to write the physics of a major production code in Sisal, a functional language developed at LLNL in collaboration with researchers throughout the world. We are investigating the expressibility of Sisal, as well as its performance on a shared-memory multiprocessor, the Y-MP. An interesting aspect of the project is that Sisal modules can call Fortran modules, and are callable by them. This eliminates the rewriting of 80% of the production code that would not benefit from parallel execution. Preliminary results indicate that the restrictive nature of the language does not cause problems in expressing the algorithms we have chosen. Some interesting aspects of programming in a mixed functional-imperative environment have surfaced, but can be managed. 8 refs.

  11. Long non-coding RNAs: Mechanism of action and functional utility

    Directory of Open Access Journals (Sweden)

    Shakil Ahmad Bhat

    2016-10-01

    Full Text Available Recent RNA sequencing studies have revealed that most of the human genome is transcribed, but very little of the total transcriptome has the ability to encode proteins. Long non-coding RNAs (lncRNAs) are non-coding transcripts longer than 200 nucleotides. Members of the non-coding genome include microRNAs (miRNAs), small regulatory RNAs and other short RNAs. Most lncRNAs are poorly annotated. Recent work on lncRNAs highlights their effects in many biological and pathological processes. LncRNAs are dysfunctional in a variety of human diseases, ranging from cancerous to non-cancerous diseases. Characterization of these lncRNA genes and their modes of action may allow their use for diagnosis, monitoring of progression and targeted therapies in various diseases. In this review, we summarize the functional perspectives as well as the mechanisms of action of lncRNAs. Keywords: LncRNA, X-chromosome inactivation, Genome imprinting, Transcription regulation, Cancer, Immunity

  12. Ontological function annotation of long non-coding RNAs through hierarchical multi-label classification.

    Science.gov (United States)

    Zhang, Jingpu; Zhang, Zuping; Wang, Zixiang; Liu, Yuting; Deng, Lei

    2018-05-15

    Long non-coding RNAs (lncRNAs) are an enormous collection of functional non-coding RNAs. Over the past decades, a large number of novel lncRNA genes have been identified. However, most lncRNAs remain functionally uncharacterized at present. Computational approaches provide a new insight to understand the potential functional implications of lncRNAs. Considering that each lncRNA may have multiple functions and a function may be further specialized into sub-functions, here we describe NeuraNetL2GO, a computational ontological function prediction approach for lncRNAs using a hierarchical multi-label classification strategy based on multiple neural networks. The neural networks are incrementally trained level by level, each performing the prediction of gene ontology (GO) terms belonging to a given level. In NeuraNetL2GO, we use topological features of the lncRNA similarity network as the input to the neural networks and employ the output results to annotate the lncRNAs. We show that NeuraNetL2GO achieves the best performance and the overall advantage in maximum F-measure and coverage on the manually annotated lncRNA2GO-55 dataset compared to other state-of-the-art methods. The source code and data are available at http://denglab.org/NeuraNetL2GO/. leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
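
    A heavily simplified sketch of the level-by-level training strategy described above, using random stand-in data: topological features of the lncRNA similarity network would form X, and each GO level contributes a binary label matrix fitted by its own neural network. Every array, shape and hyperparameter here is an assumption made for illustration; NeuraNetL2GO itself should be obtained from the URL given in the abstract.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.standard_normal((300, 64))                                     # stand-in network features
      y_by_level = [rng.integers(0, 2, size=(300, k)) for k in (5, 12, 30)]  # GO term labels per level

      models = []
      for y_level in y_by_level:
          # One multi-label network per GO level, trained level by level.
          clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
          clf.fit(X, y_level)
          models.append(clf)

      print([m.predict(X[:5]).shape for m in models])   # per-level predictions for 5 lncRNAs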

  13. Sparse coding reveals greater functional connectivity in female brains during naturalistic emotional experience.

    Directory of Open Access Journals (Sweden)

    Yudan Ren

    Full Text Available Functional neuroimaging is widely used to examine changes in brain function associated with age, gender or neuropsychiatric conditions. FMRI (functional magnetic resonance imaging) studies employ either laboratory-designed tasks that engage the brain with abstracted and repeated stimuli, or resting-state paradigms with little behavioral constraint. Recently, novel neuroimaging paradigms using naturalistic stimuli are gaining increasing attention, as they offer an ecologically valid condition to approximate brain function in real life. Wider application of naturalistic paradigms in exploring individual differences in brain function, however, awaits further advances in statistical methods for modeling dynamic and complex datasets. Here, we developed a novel data-driven strategy that employs group sparse representation to assess gender differences in brain responses during naturalistic emotional experience. Compared to independent component analysis (ICA), the sparse coding algorithm considers the intrinsic sparsity of neural coding and thus could be more suitable for modeling dynamic whole-brain fMRI signals. An online dictionary learning and sparse coding algorithm was applied to the aggregated fMRI signals from both groups, which were subsequently factorized into a common time-series signal dictionary matrix and the associated weight coefficient matrix. Our results demonstrate that group sparse representation can effectively identify gender differences in functional brain networks during natural viewing, with improved sensitivity and reliability over the ICA-based method. Group sparse representation hence offers a superior data-driven strategy for examining brain function during naturalistic conditions, with great potential for clinical application in neuropsychiatric disorders.
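
    The factorization described above can be sketched with scikit-learn's MiniBatchDictionaryLearning as a stand-in for the study's online dictionary learning and sparse coding algorithm. The data here are random placeholders; in the study, X would hold the aggregated whole-brain fMRI signals.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(0)
      X = rng.standard_normal((5000, 240))     # rows: voxels/signals, columns: time points

      learner = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                            batch_size=100, random_state=0)
      codes = learner.fit(X).transform(X)      # sparse weight coefficient matrix
      dictionary = learner.components_         # common time-series signal dictionary

      print(codes.shape, dictionary.shape)
      print("mean nonzero coefficients per signal:", float((codes != 0).sum(axis=1).mean()))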

  14. A nodal Green's function method of reactor core fuel management code, NGCFM2D

    International Nuclear Information System (INIS)

    Li Dongsheng; Yao Dong.

    1987-01-01

    This paper presents the mathematical model and program structure of the nodal Green's function method reactor core fuel management code, NGCFM2D. Computing results for some reactor cores obtained with NGCFM2D are analysed and compared with those of other codes

  15. Computational Approaches Reveal New Insights into Regulation and Function of Non; coding RNAs and their Targets

    KAUST Repository

    Alam, Tanvir

    2016-01-01

    Regulation and function of protein-coding genes are increasingly well-understood, but no comparable evidence exists for non-coding RNA (ncRNA) genes, which appear to be more numerous than protein-coding genes. We developed a novel machine

  16. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Background: Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they are coding for a protein, they generally escape detection by comparative genomics approaches. Results: We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion: Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.

  17. Extracellular vesicle associated long non-coding RNAs functionally enhance cell viability

    Directory of Open Access Journals (Sweden)

    Chris Hewson

    2016-10-01

    Full Text Available Cells communicate with one another to create microenvironments and share resources. One avenue by which cells communicate is through the action of exosomes. Exosomes are extracellular vesicles that are released by one cell and taken up by neighbouring cells. But how exosomes instigate communication between cells has remained largely unknown. We present evidence here that particular long non-coding RNA molecules are preferentially packaged into exosomes. We also find that a specific class of these exosome-associated non-coding RNAs functionally modulates cell viability by direct interactions with l-lactate dehydrogenase B (LDHB), high-mobility group protein 17 (HMG-17), and CSF2RB, proteins involved in metabolism, nucleosomal architecture and cell signalling, respectively. Knowledge of this endogenous cell-to-cell pathway, the proteins interacting with exosome-associated non-coding transcripts and their interacting domains could lead to a better understanding of not only cell-to-cell interactions but also the development of exosome-targeted approaches in patient-specific cell-based therapies. Keywords: Non-coding RNA, Extracellular RNA, Exosomes, Retroelement, Pseudogene

  18. Plato: A localised orbital based density functional theory code

    Science.gov (United States)

    Kenny, S. D.; Horsfield, A. P.

    2009-12-01

    The Plato package allows both orthogonal and non-orthogonal tight-binding as well as density functional theory (DFT) calculations to be performed within a single framework. The package also provides extensive tools for analysing the results of simulations as well as a number of tools for creating input files. The code is based upon the ideas first discussed in Sankey and Niklewski (1989) [1] with extensions to allow high-quality DFT calculations to be performed. DFT calculations can utilise either the local density approximation or the generalised gradient approximation. Basis sets from minimal basis through to ones containing multiple radial functions per angular momenta and polarisation functions can be used. Illustrations of how the package has been employed are given along with instructions for its utilisation. Program summary: Program title: Plato Catalogue identifier: AEFC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 219 974 No. of bytes in distributed program, including test data, etc.: 1 821 493 Distribution format: tar.gz Programming language: C/MPI and PERL Computer: Apple Macintosh, PC, Unix machines Operating system: Unix, Linux and Mac OS X Has the code been vectorised or parallelised?: Yes, up to 256 processors tested RAM: Up to 2 Gbytes per processor Classification: 7.3 External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW Nature of problem: Density functional theory study of electronic structure and total energies of molecules, crystals and surfaces. Solution method: Localised orbital based density functional theory. Restrictions: Tight-binding and density functional theory only, no exact exchange. Unusual features: Both atom centred and uniform meshes available

  19. Locating protein-coding sequences under selection for additional, overlapping functions in 29 mammalian genomes

    DEFF Research Database (Denmark)

    Lin, Michael F; Kheradpour, Pouya; Washietl, Stefan

    2011-01-01

    conservation compared to typical protein-coding genes—especially at synonymous sites. In this study, we use genome alignments of 29 placental mammals to systematically locate short regions within human ORFs that show conspicuously low estimated rates of synonymous substitution across these species. The 29-species alignment provides statistical power to locate more than 10,000 such regions with resolution down to nine-codon windows, which are found within more than a quarter of all human protein-coding genes and contain ~2% of their synonymous sites. We collect numerous lines of evidence that the observed synonymous constraint in these regions reflects selection on overlapping functional elements including splicing regulatory elements, dual-coding genes, RNA secondary structures, microRNA target sites, and developmental enhancers. Our results show that overlapping functional elements are common in mammalian...

  20. 18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes

    Science.gov (United States)

    2010-04-01

    Table 1 to Part 301 — Functionalization and Escalation Codes. 18 CFR, Conservation of Power and Water Resources; Federal Energy Regulatory Commission, Department of Energy; Regulations for Federal Power Marketing Administrations; Average System Cost...

  1. A Radiation Chemistry Code Based on the Green's Function of the Diffusion Equation

    Science.gov (United States)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Stochastic radiation track structure codes are of great interest for space radiation studies and hadron therapy in medicine. These codes are used for many purposes, notably for microdosimetry and DNA damage studies. In the last two decades, they were also used with the Independent Reaction Times (IRT) method in the simulation of chemical reactions, to calculate the yield of various radiolytic species produced during the radiolysis of water and in chemical dosimeters. Recently, we have developed a Green's function based code to simulate reversible chemical reactions with an intermediate state, which yielded results in excellent agreement with those obtained by using the IRT method. This code was also used to simulate the interaction of particles with membrane receptors. We are in the process of including this program for use with the Monte-Carlo track structure code Relativistic Ion Tracks (RITRACKS). This recent addition should greatly expand the capabilities of RITRACKS, notably to simulate DNA damage by both the direct and indirect effect.
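
    The free-space Green's function of the diffusion equation underlying such codes is simply a Gaussian propagator: over a time step dt, each coordinate of a diffusing species receives a normal increment of variance 2·D·dt. The sketch below samples it directly, with illustrative parameter values; the reversible-reaction Green's functions and the IRT machinery of the paper are not reproduced here.

      import numpy as np

      def diffuse(positions, D, dt, rng):
          # Sample the free diffusion Green's function: x(t+dt) = x(t) + N(0, 2*D*dt) per axis.
          return positions + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=positions.shape)

      rng = np.random.default_rng(1)
      pos = np.zeros((10000, 3))               # all particles start at the origin
      D, dt, steps = 5.0e-9, 1.0e-12, 100      # illustrative diffusion coefficient (m^2/s), 1 ps steps
      for _ in range(steps):
          pos = diffuse(pos, D, dt, rng)
      print(np.mean(np.sum(pos ** 2, axis=1)), 6.0 * D * steps * dt)   # <r^2> vs the expected 6*D*t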

  2. New MoM code incorporating multiple domain basis functions

    CSIR Research Space (South Africa)

    Lysko, AA

    2011-08-01

    Full Text Available piecewise linear approximation of geometry. This often leads to an unnecessarily great number of unknowns used to model relatively small loop and spiral antennas, coils and other curved structures. This is because the program creates a dense mesh... to accelerate computation of the elements of the impedance matrix and showed acceleration factor exceeding an order of magnitude, subject to a high accuracy requirement. 3. On Code Functionality and Application Results The package of programs was written...

  3. FCG: a code generator for lazy functional languages

    NARCIS (Netherlands)

    Kastens, U.; Langendoen, K.G.; Hartel, Pieter H.; Pfahler, P.

    1992-01-01

    The FCG code generator produces portable code that supports efficient two-space copying garbage collection. The code generator transforms the output of the FAST compiler front end into an abstract machine code. This code explicitly uses a call stack, which is accessible to the garbage collector. In

  4. Strict optical orthogonal codes for purely asynchronous code-division multiple-access applications

    Science.gov (United States)

    Zhang, Jian-Guo

    1996-12-01

    Strict optical orthogonal codes are presented for purely asynchronous optical code-division multiple-access (CDMA) applications. The proposed code strictly guarantees that the peaks of its cross-correlation functions and the sidelobes of any of its autocorrelation functions have a value of 1 in purely asynchronous data communications. The basic theory of the proposed codes is given. An experiment on optical CDMA systems is also demonstrated to verify the characteristics of the proposed code.
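
    The sketch below is a small checker for the periodic correlation constraints of an optical orthogonal code, applied to the classical (13, 3, 1, 1) code {0, 1, 4}, {0, 2, 7}. It verifies only the standard periodic auto- and cross-correlation bounds; the chip-level "strict" conditions introduced in the paper are not modelled here.

      import numpy as np

      def correlation_ok(codewords, n, lam=1):
          # codewords: sets of mark (chip = 1) positions within a length-n frame.
          seqs = []
          for cw in codewords:
              s = np.zeros(n, dtype=int)
              s[list(cw)] = 1
              seqs.append(s)
          for i, x in enumerate(seqs):
              for j, y in enumerate(seqs):
                  for shift in range(n):
                      if i == j and shift == 0:
                          continue                       # skip the in-phase autocorrelation peak
                      if int(np.dot(x, np.roll(y, shift))) > lam:
                          return False
          return True

      print(correlation_ok([{0, 1, 4}, {0, 2, 7}], n=13))   # True for this classical OOC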

  5. Study regarding the density evolution of messages and the characteristic functions associated of a LDPC code

    Science.gov (United States)

    Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; (Barboni Haţiegan, L.

    2017-01-01

    In this paper, a method is shown by which a set of characteristic functions is associated with an LDPC code, together with functions that represent the density evolution of the messages passed along the edges of a Tanner graph. Graphic representations of the density evolution are shown, and the likelihood threshold that marks the asymptotic boundary between decodable and non-decodable codes is studied and simulated using MathCad V14 software.
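
    In its simplest setting — a regular LDPC ensemble on the binary erasure channel — density evolution reduces to iterating a scalar recursion and locating the threshold below which the erasure probability of the messages on the Tanner graph edges converges to zero. The sketch below computes that threshold for the (3, 6) ensemble; the paper's characteristic-function formulation and its MathCad implementation are not reproduced here.

      def de_threshold_bec(dv=3, dc=6, iters=5000, tol=1e-10):
          # Bisect for the largest erasure probability eps such that the density
          # evolution recursion x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)
          # drives the edge erasure probability x to zero.
          def converges(eps):
              x = eps
              for _ in range(iters):
                  x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                  if x < tol:
                      return True
              return False

          lo, hi = 0.0, 1.0
          while hi - lo > 1e-6:
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if converges(mid) else (lo, mid)
          return lo

      print(de_threshold_bec())   # approximately 0.429 for the regular (3, 6) ensemble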

  6. Improved response function calculations for scintillation detectors using an extended version of the MCNP code

    CERN Document Server

    Schweda, K

    2002-01-01

    The analysis of (e,e'n) experiments at the Darmstadt superconducting electron linear accelerator S-DALINAC required the calculation of neutron response functions for the NE213 liquid scintillation detectors used. In an open geometry, these response functions can be obtained using the Monte Carlo codes NRESP7 and NEFF7. However, for more complex geometries, an extended version of the Monte Carlo code MCNP exists. This extended version of the MCNP code was improved upon by adding individual light-output functions for charged particles. In addition, more than one volume can be defined as a scintillator, thus allowing the simultaneous calculation of the response for multiple detector setups. With the implementation of ¹²C(n,n'3α) reactions, all relevant reactions for neutron energies E_n < 20 MeV are now taken into consideration. The results of these calculations were compared to experimental data using monoenergetic neutrons in an open geometry and a ²⁵²Cf neutron source in th...

  7. Roles, Functions, and Mechanisms of Long Non-coding RNAs in Cancer

    Directory of Open Access Journals (Sweden)

    Yiwen Fang

    2016-02-01

    Full Text Available Long non-coding RNAs (lncRNAs) play important roles in cancer. They are involved in chromatin remodeling, as well as transcriptional and post-transcriptional regulation, through a variety of chromatin-based mechanisms and via cross-talk with other RNA species. lncRNAs can function as decoys, scaffolds, and enhancer RNAs. This review summarizes the characteristics of lncRNAs, including their roles, functions, and working mechanisms, describes methods for identifying and annotating lncRNAs, and discusses future opportunities for lncRNA-based therapies using antisense oligonucleotides.

  8. A comparison of two nodal codes : Advanced nodal code (ANC) and analytic function expansion nodal (AFEN) code

    International Nuclear Information System (INIS)

    Chung, S.K.; Hah, C.J.; Lee, H.C.; Kim, Y.H.; Cho, N.Z.

    1996-01-01

    Modern nodal methods usually employ the transverse integration technique in order to reduce a multi-dimensional diffusion equation to one-dimensional diffusion equations. The use of the transverse integration technique requires two major approximations: a transverse leakage approximation and a one-dimensional flux approximation. Both the transverse leakage and the one-dimensional flux are approximated by polynomials. ANC (Advanced Nodal Code), developed by Westinghouse, employs a modern nodal expansion method for the flux calculation, the equivalence theory for homogenization error reduction and a group theory for pin power recovery. Unlike the conventional modern nodal methods, the AFEN (Analytic Function Expansion Nodal) method expands homogeneous flux distributions within a node into non-separable analytic basis functions, which eliminates the two major approximations of the modern nodal methods. A comparison study of AFEN with ANC has been performed to assess the applicability of AFEN to commercial PWRs and to different types of reactors such as MOX-fueled reactors. The qualification comparison results demonstrate that the AFEN methodology is accurate enough to apply to commercial PWR analysis. The results show that AFEN provides very accurate results (core multiplication factor and assembly power distribution) for cores that exhibit strong flux gradients, as in a MOX-loaded core. (author)

  9. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  10. Development of NRESP98 Monte Carlo codes for the calculation of neutron response functions of neutron detectors. Calculation of the response function of spherical BF₃ proportional counter

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, M.; Saito, K.; Ando, H. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-05-01

    The method to calculate the response function of a spherical BF₃ proportional counter, which is commonly used as a neutron dose rate meter and as a neutron spectrometer with a multi-moderator system, is developed. As the calculation code for evaluating the response function, the existing code series NRESP, a Monte Carlo code for the calculation of response functions of neutron detectors, is selected. However, since the application scope of the existing NRESP is restricted, NRESP98 is tuned as a generally applicable code, with expansion of the geometrical conditions, the applicable elements, etc. NRESP98 is tested with the response function of the spherical BF₃ proportional counter. Including the effect of the distribution of the amplification factor, the detailed evaluation of the charged particle transport and the effect of the statistical distribution, the results of the NRESP98 calculation fit the experiment within ±10%. (author)

  11. Functions of Code-Switching among Iranian Advanced and Elementary Teachers and Students

    Science.gov (United States)

    Momenian, Mohammad; Samar, Reza Ghafar

    2011-01-01

    This paper reports on the findings of a study carried out on advanced and elementary teachers' and students' functions and patterns of code-switching in Iranian English classrooms. This concept has not been examined as adequately in L2 (second language) classroom contexts as in natural contexts outside the classroom. Therefore, besides reporting on the…

  12. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  13. Nifty Native Implemented Functions: low-level meets high-level code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Erlang Native Implemented Functions (NIFs) allow developers to implement functions in C (or C++) rather than Erlang. NIFs are useful for integrating high performance or legacy code in Erlang applications. The talk will cover how to implement NIFs, use cases, and common pitfalls when employing them. Further, we will discuss how and why Erlang applications, such as Riak, use NIFs. About the speaker Ian Plosker is the Technical Lead, International Operations at Basho Technologies, the makers of the open source database Riak. He has been developing software professionally for 10 years and programming since childhood. Prior to working at Basho, he developed everything from CMS to bioinformatics platforms to corporate competitive intelligence management systems. At Basho, he's been helping customers be incredibly successful using Riak.

  14. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...... for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding...... strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function....
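
    For orientation, the sketch below implements the classical Blahut-Arimoto iteration for the ordinary rate-distortion function of a discrete memoryless source, swept through the Lagrange parameter s; the action-dependent rate-distortion-cost computation and the LDGM/Wyner-Ziv code design developed in the paper are not reproduced here.

    ```python
    import numpy as np

    def blahut_arimoto_rd(p_x, dist, s, n_iter=500, tol=1e-9):
        """Classical Blahut-Arimoto iteration for the rate-distortion function
        of a discrete memoryless source (a sketch, not the action-dependent
        variant of the paper).  Returns (rate, distortion), rate in nats."""
        n, m = dist.shape
        q = np.full(m, 1.0 / m)                  # reproduction marginal q(xhat)
        A = np.exp(-s * dist)                    # exp(-s * d(x, xhat))
        for _ in range(n_iter):
            Q = q * A                            # q(xhat|x) ~ q(xhat) exp(-s d)
            Q /= Q.sum(axis=1, keepdims=True)
            q_new = p_x @ Q                      # updated reproduction marginal
            if np.max(np.abs(q_new - q)) < tol:
                q = q_new
                break
            q = q_new
        Q = q * A
        Q /= Q.sum(axis=1, keepdims=True)
        D = float(np.sum(p_x[:, None] * Q * dist))
        R = float(np.sum(p_x[:, None] * Q * np.log(Q / q[None, :])))
        return R, D

    # Binary source with Hamming distortion; larger s trades distortion for rate.
    R, D = blahut_arimoto_rd(np.array([0.5, 0.5]),
                             np.array([[0.0, 1.0], [1.0, 0.0]]), s=3.0)
    ```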

  15. Probe code: a set of programs for processing and analysis of the left ventricular function - User's manual

    International Nuclear Information System (INIS)

    Piva, R.M.V.

    1987-01-01

    The User's Manual of the Probe Code is an addendum to the M.Sc. thesis entitled A Microcomputer System of Nuclear Probe to Check the Left Ventricular Function. The Probe Code is software developed for the processing and off-line analysis of left ventricular function curves obtained in vivo. These curves are produced by means of an external scintigraphic probe, collimated and placed over the left ventricle after venous injection of Tc-99m. (author)

  16. Functional Diets Modulate lncRNA-Coding RNAs and Gene Interactions in the Intestine of Rainbow Trout Oncorhynchus mykiss.

    Science.gov (United States)

    Núñez-Acuña, Gustavo; Détrée, Camille; Gallardo-Escárate, Cristian; Gonçalves, Ana Teresa

    2017-06-01

    The advent of functional genomics has sparked interest in inferring the function of non-coding regions from the transcriptome in non-model species. However, numerous biological processes remain understudied from this perspective, including intestinal immunity in farmed fish. The aim of this study was to infer long non-coding RNA (lncRNA) expression profiles in rainbow trout (Oncorhynchus mykiss) fed for 30 days with functional diets based on pre- and probiotics. For this, whole-transcriptome sequencing was conducted with Illumina technology, and lncRNAs were mined to evaluate transcriptional activity in conjunction with known protein sequences. To detect differentially expressed transcripts, 880 novel and 9067 previously described O. mykiss lncRNAs were used. Expression levels and genome co-localization correlations with coding genes were also analyzed. Significant differences in gene expression were primarily found in the probiotic diet, which had a twofold downregulation of lncRNAs compared to the other treatments. Notable differences by diet were also evidenced between the coding genes of distinct metabolic processes. In contrast, genome co-localization of lncRNAs with coding genes was similar for all diets. This study contributes novel knowledge regarding lncRNAs in fish, suggesting key roles in salmonids fed in-feed additives with the capacity to modulate intestinal homeostasis and host health.

  17. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself from among the upper- and lower-level codes of the selected entry that were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in the form of a 'User's Defined Function', decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data-processing programs was possible. This program has the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, this program can be used for automation of routine work in the department of radiology.
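
    The two-step lookup described above can be illustrated with a tiny sketch; the dictionary entries below are hypothetical placeholders (the real program is a FoxBASE application and its 11 dictionary files are not reproduced here), so only the organ-code, first-digit, pathology-code pattern is shown.

    ```python
    # Hypothetical sketch of the organ-then-pathology ACR lookup; the names and
    # code numbers below are placeholders, not entries from the actual ACR files.
    ORGAN_CODES = {"some organ": "131"}                   # organ dictionary file
    PATHOLOGY_FILES = {"1": {"some finding": "3661"}}     # one pathology file per leading digit

    def acr_code(organ_name: str, pathology_name: str) -> str:
        organ = ORGAN_CODES[organ_name]
        pathology = PATHOLOGY_FILES[organ[0]][pathology_name]   # file chosen by first digit
        return f"{organ}.{pathology}"                           # e.g. "131.3661"

    print(acr_code("some organ", "some finding"))
    ```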

  18. Packing simulation code to calculate distribution function of hard spheres by Monte Carlo method : MCRDF

    International Nuclear Information System (INIS)

    Murata, Isao; Mori, Takamasa; Nakagawa, Masayuki; Shirai, Hiroshi.

    1996-03-01

    High Temperature Gas-cooled Reactors (HTGRs) employ spherical fuels named coated fuel particles (CFPs), each consisting of a microsphere of low-enriched UO{sub 2} with coating layers to prevent FP release. A large number of such spherical fuels are distributed randomly in the cores. Therefore, the nuclear design of HTGRs is generally performed on the basis of the multigroup approximation using a diffusion code, an S{sub N} transport code or a group-wise Monte Carlo code. This report summarizes a Monte Carlo hard-sphere packing simulation code that simulates the packing of equal hard spheres and evaluates the necessary probability distributions, which are used in the new Monte Carlo calculation method developed to treat randomly distributed spherical fuels with the continuous-energy Monte Carlo method. The code yields various statistical quantities, namely the Radial Distribution Function (RDF), the Nearest Neighbor Distribution (NND), the 2-dimensional RDF and so on, for random packing as well as for the ordered close packings FCC and BCC. (author)
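
    The kind of quantity MCRDF produces can be sketched generically: the snippet below packs equal hard spheres by random sequential addition in a periodic box and estimates the radial distribution function by histogramming pair distances (an illustrative sketch only; MCRDF's own packing and sampling scheme is not reproduced here).

    ```python
    import numpy as np

    def pack_spheres(n, radius, box, max_tries=200000, rng=None):
        """Random sequential addition of equal hard spheres in a periodic cubic box
        (generic sketch; may return fewer than n spheres if max_tries is reached)."""
        rng = np.random.default_rng() if rng is None else rng
        centers = []
        for _ in range(max_tries):
            if len(centers) == n:
                break
            c = rng.uniform(0.0, box, size=3)
            if centers:
                d = np.array(centers) - c
                d -= box * np.round(d / box)              # minimum-image convention
                if np.min(np.linalg.norm(d, axis=1)) < 2 * radius:
                    continue                               # overlap: reject and retry
            centers.append(c)
        return np.array(centers)

    def radial_distribution(centers, box, nbins=100):
        """Histogram estimate of the radial distribution function g(r)."""
        n = len(centers)
        edges = np.linspace(0.0, box / 2, nbins + 1)       # minimum image limits r to box/2
        counts = np.zeros(nbins)
        for i in range(n - 1):
            d = centers[i + 1:] - centers[i]
            d -= box * np.round(d / box)
            counts += np.histogram(np.linalg.norm(d, axis=1), bins=edges)[0]
        rho = n / box**3
        shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
        g = 2.0 * counts / (n * rho * shells)              # each pair counted once above
        return 0.5 * (edges[:-1] + edges[1:]), g

    centers = pack_spheres(n=200, radius=0.025, box=1.0)
    r, g = radial_distribution(centers, box=1.0)
    ```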

  19. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
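
    The write-inputs, run, read-outputs pattern described above can be sketched in a few lines; the snippet below is an illustrative Python analogue with hypothetical file and executable names, not the actual C/C++ DLL interface that GoldSim loads.

    ```python
    import subprocess
    from pathlib import Path

    def run_external_code(inputs, workdir="run", exe="./external_code",
                          in_name="model.inp", out_name="model.out"):
        """Sketch of the linking pattern: write the list of inputs to a file,
        run the external application, then read back the list of outputs.
        The executable and file names here are placeholders."""
        work = Path(workdir)
        work.mkdir(exist_ok=True)
        # 1. create the input file for the external application
        (work / in_name).write_text("\n".join(f"{v:.6g}" for v in inputs))
        # 2. run the external code and wait for it to finish
        subprocess.run([exe, in_name], cwd=work, check=True)
        # 3. read the outputs written by the external application
        return [float(tok) for tok in (work / out_name).read_text().split()]
    ```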

  20. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    According to the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost and effort, a tool should be used that is developed independently of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  1. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    Science.gov (United States)

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Parents' Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life.

    Science.gov (United States)

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring and to assess the validity and reliability of the data sets obtained. Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 and 0.00, respectively. Code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00. The mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after repeat. Corresponding measures were -1.10 (range: -5.31 to 5.25) and -1.11 (range: -5.42 to 5.36), respectively. Based on measures obtained on the 2 occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level. There was continuity in covering the range across disabilities. And, first and foremost, the distribution of codes reflected a true continuity in disability with codes for motor functions activated first, then codes for cognitive functions

  3. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  4. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  5. FARNA: knowledgebase of inferred functions of non-coding RNA transcripts

    KAUST Repository

    Alam, Tanvir

    2016-10-12

    Non-coding RNA (ncRNA) genes play a major role in the control of heterogeneous cellular behavior. Yet, their functions are largely uncharacterized. Currently available databases lack in-depth information on ncRNA functions across the spectrum of various cells/tissues. Here, we present FARNA, a knowledgebase of inferred functions of 10,289 human ncRNA transcripts (2,734 microRNA and 7,555 long ncRNA) in 119 tissues and 177 primary cells of human. Since transcription factors (TFs) and TF co-factors (TcoFs) are crucial components of the regulatory machinery for activation of gene transcription, the cellular processes and diseases in which TFs and TcoFs are involved suggest functions of the transcripts they regulate. In FARNA, functions of a transcript are inferred from the TFs and TcoFs whose genes co-express with the transcript controlled by these TFs and TcoFs in a considered cell/tissue. Transcripts were annotated using statistically enriched GO terms, pathways and diseases across cells/tissues based on the guilt-by-association principle. Expression profiles across cells/tissues based on Cap Analysis of Gene Expression (CAGE) are provided. FARNA, having the most comprehensive function annotation of the considered ncRNAs across the widest spectrum of human cells/tissues, has the potential to greatly contribute to our understanding of ncRNA roles and their regulatory mechanisms in human. FARNA can be accessed at: http://cbrc.kaust.edu.sa/farna

  6. FARNA: knowledgebase of inferred functions of non-coding RNA transcripts

    KAUST Repository

    Alam, Tanvir; Uludag, Mahmut; Essack, Magbubah; Salhi, Adil; Ashoor, Haitham; Hanks, John B.; Kapfer, Craig Eric; Mineta, Katsuhiko; Gojobori, Takashi; Bajic, Vladimir B.

    2016-01-01

    Non-coding RNA (ncRNA) genes play a major role in the control of heterogeneous cellular behavior. Yet, their functions are largely uncharacterized. Currently available databases lack in-depth information on ncRNA functions across the spectrum of various cells/tissues. Here, we present FARNA, a knowledgebase of inferred functions of 10,289 human ncRNA transcripts (2,734 microRNA and 7,555 long ncRNA) in 119 tissues and 177 primary cells of human. Since transcription factors (TFs) and TF co-factors (TcoFs) are crucial components of the regulatory machinery for activation of gene transcription, the cellular processes and diseases in which TFs and TcoFs are involved suggest functions of the transcripts they regulate. In FARNA, functions of a transcript are inferred from the TFs and TcoFs whose genes co-express with the transcript controlled by these TFs and TcoFs in a considered cell/tissue. Transcripts were annotated using statistically enriched GO terms, pathways and diseases across cells/tissues based on the guilt-by-association principle. Expression profiles across cells/tissues based on Cap Analysis of Gene Expression (CAGE) are provided. FARNA, having the most comprehensive function annotation of the considered ncRNAs across the widest spectrum of human cells/tissues, has the potential to greatly contribute to our understanding of ncRNA roles and their regulatory mechanisms in human. FARNA can be accessed at: http://cbrc.kaust.edu.sa/farna

  7. The functional role of long non-coding RNA in digestive system carcinomas.

    Science.gov (United States)

    Wang, Guang-Yu; Zhu, Yuan-Yuan; Zhang, Yan-Qiao

    2014-09-01

    In recent years, long non-coding RNAs (lncRNAs) have been emerging as either oncogenes or tumor suppressor genes. Recent evidence suggests that lncRNAs play a very important role in digestive system carcinomas. However, the biological function of lncRNAs in the vast majority of digestive system carcinomas remains unclear. Recently, an increasing number of studies have begun to explore the molecular mechanisms and regulatory networks through which lncRNAs are implicated in tumorigenesis. In this review, we highlight the emerging functional roles of lncRNAs in digestive system carcinomas. It is becoming clear that lncRNAs will be exciting and potentially useful for the diagnosis and treatment of digestive system carcinomas; some of these lncRNAs might function both as diagnostic markers and as treatment targets for digestive system carcinomas.

  8. A hybrid WDM/OCDMA ring with a dynamic add/drop function based on Fourier code for local area networks.

    Science.gov (United States)

    Choi, Yong-Kyu; Hosoya, Kenta; Lee, Chung Ghiu; Hanawa, Masanori; Park, Chang-Soo

    2011-03-28

    We propose and experimentally demonstrate a hybrid WDM/OCDMA ring with a dynamic add/drop function based on a Fourier code for local area networks. The dynamic function is implemented by mechanically tuning the Fourier encoder/decoder for optical code division multiple access (OCDMA) encoding/decoding. Wavelength division multiplexing (WDM) is utilized for node assignment, and a 4-chip Fourier code recovers the matched signal from the codes. As an optical source well adapted to the WDM channels and to short optical pulse generation, reflective semiconductor optical amplifiers (RSOAs) are used with a fiber Bragg grating (FBG) and are gain-switched. For the demonstration, we experimentally investigated a two-node hybrid WDM/OCDMA ring with a 4-chip Fourier encoder/decoder fabricated by cascading four FBGs, achieving a bit error rate (BER) below 10{sup -9} over a node span of 10.64 km at 1.25 Gb/s.

  9. Design of tallying function for general purpose Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Deng Li; Zhang Baoyin

    2013-01-01

    A new postponed accumulation algorithm was proposed. Based on the JCOGIN (J combinatorial geometry Monte Carlo transport infrastructure) framework and the postponed accumulation algorithm, the tallying function of the general purpose Monte Carlo neutron-photon transport code JMCT was improved markedly. JMCT achieves a tallying efficiency 28% higher than that of MCNP 4C for a simple geometry model, and JMCT is faster than MCNP 4C by two orders of magnitude for a complicated repeated-structure model. The improved tallying capability of JMCT lays a firm foundation for reactor analysis and multi-step burnup calculation. (authors)

  10. Performance and complexity of tunable sparse network coding with gradual growing tuning functions over wireless networks

    OpenAIRE

    Garrido Ortiz, Pablo; Sørensen, Chres W.; Lucani Roetter, Daniel Enrique; Agüero Calvo, Ramón

    2016-01-01

    Random Linear Network Coding (RLNC) has been shown to be a technique with several benefits, in particular when applied over wireless mesh networks, since it provides robustness against packet losses. On the other hand, Tunable Sparse Network Coding (TSNC) is a promising concept, which leverages a trade-off between computational complexity and goodput. An optimal density tuning function has not been found yet, due to the lack of a closed-form expression that links density, performance and comp...

  11. Functions of Arabic-English Code-Switching: Sociolinguistic Insights from a Study Abroad Program

    Science.gov (United States)

    Al Masaeed, Khaled

    2013-01-01

    This sociolinguistic study examines the functions and motivations of code-switching, which is used here to mean the use of more than one language in the same conversation. The conversations studied here take place in a very particular context: one-on-one speaking sessions in a study abroad program in Morocco where English is the L1 and Arabic the…

  12. A human-specific de novo protein-coding gene associated with human brain functions.

    Directory of Open Access Journals (Sweden)

    Chuan-Yun Li

    2010-03-01

    To understand whether any human-specific new genes may be associated with human brain functions, we computationally screened the genetic vulnerable factors identified through Genome-Wide Association Studies and linkage analyses of nicotine addiction and found one human-specific de novo protein-coding gene, FLJ33706 (alternative gene symbol C20orf203). Cross-species analysis revealed interesting evolutionary paths of how this gene had originated from noncoding DNA sequences: insertion of repeat elements, especially Alu, contributed to the formation of the first coding exon and six standard splice junctions on the branch leading to humans and chimpanzees, and two subsequent substitutions in the human lineage escaped two stop codons and created an open reading frame of 194 amino acids. We experimentally verified FLJ33706's mRNA and protein expression in the brain. Real-Time PCR in multiple tissues demonstrated that FLJ33706 was most abundantly expressed in brain. Human polymorphism data suggested that FLJ33706 encodes a protein under purifying selection. A specifically designed antibody detected its protein expression across human cortex, cerebellum and midbrain. An immunohistochemistry study in normal human brain cortex revealed the localization of FLJ33706 protein in neurons. Elevated expression of FLJ33706 was detected in Alzheimer's brain samples, suggesting a role for this novel gene in the human-specific pathogenesis of Alzheimer's disease. FLJ33706 provided the strongest evidence so far that human-specific de novo genes can have protein-coding potential and differential protein expression, and be involved in human brain functions.

  13. An integrative approach to predicting the functional effects of small indels in non-coding regions of the human genome.

    Science.gov (United States)

    Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin

    2017-10-06

    Small insertions and deletions (indels) have a significant influence on human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state-of-the-art models for assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact and prioritise small indels throughout the whole non-coding genome.

  14. Coding and decoding libraries of sequence-defined functional copolymers synthesized via photoligation.

    Science.gov (United States)

    Zydziak, Nicolas; Konrad, Waldemar; Feist, Florian; Afonin, Sergii; Weidner, Steffen; Barner-Kowollik, Christopher

    2016-11-30

    Designing artificial macromolecules with absolute sequence order represents a considerable challenge. Here we report an advanced light-induced avenue to monodisperse sequence-defined functional linear macromolecules up to decamers via a unique photochemical approach. The versatility of the synthetic strategy, combining sequential and modular concepts, enables the synthesis of perfect macromolecules varying in chemical constitution and topology. Specific functions are placed at arbitrary positions along the chain via the successive addition of monomer units and blocks, leading to a library of functional homopolymers, alternating copolymers and block copolymers. The in-depth characterization of each sequence-defined chain confirms the precision nature of the macromolecules. Decoding of the functional information contained in the molecular structure is achieved via tandem mass spectrometry without recourse to their synthetic history, showing that the sequence information can be read. We submit that the presented photochemical strategy is a viable and advanced concept for coding individual monomer units along a macromolecular chain.

  15. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.

  16. Critical Care Coding for Neurologists.

    Science.gov (United States)

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  17. Comprehensive analysis of coding-lncRNA gene co-expression network uncovers conserved functional lncRNAs in zebrafish.

    Science.gov (United States)

    Chen, Wen; Zhang, Xuan; Li, Jing; Huang, Shulan; Xiang, Shuanglin; Hu, Xiang; Liu, Changning

    2018-05-09

    Zebrafish is a fully developed model system for studying developmental processes and human disease. Recent deep-sequencing studies have discovered a large number of long non-coding RNAs (lncRNAs) in zebrafish. However, only a few of them have been functionally characterized. Therefore, how to take advantage of the mature zebrafish system to deeply investigate lncRNA function and conservation is really intriguing. We systematically collected and analyzed a series of zebrafish RNA-seq data, then combined them with resources from known databases and the literature. As a result, we obtained by far the most complete dataset of zebrafish lncRNAs, containing 13,604 lncRNA genes (21,128 transcripts) in total. Based on that, a co-expression network of zebrafish coding and lncRNA genes was constructed and analyzed, and used to predict the Gene Ontology (GO) and KEGG annotations of lncRNAs. Meanwhile, we performed a conservation analysis of zebrafish lncRNAs, identifying 1828 conserved zebrafish lncRNA genes (1890 transcripts) that have putative mammalian orthologs. We also found that zebrafish lncRNAs play important roles in the regulation of the development and function of the nervous system; these conserved lncRNAs present significant sequence and functional conservation with their mammalian counterparts. By integrative data analysis and construction of a coding-lncRNA gene co-expression network, we obtained the most comprehensive dataset of zebrafish lncRNAs to date, as well as systematic annotations and comprehensive analyses of their function and conservation. Our study provides a reliable zebrafish-based platform to deeply explore lncRNA function and mechanism, as well as the lncRNA commonality between zebrafish and human.

  18. Long non-coding RNAs: Mechanism of action and functional utility

    OpenAIRE

    Bhat, Shakil Ahmad; Ahmad, Syed Mudasir; Mumtaz, Peerzada Tajamul; Malik, Abrar Ahad; Dar, Mashooq Ahmad; Urwat, Uneeb; Shah, Riaz Ahmad; Ganai, Nazir Ahmad

    2016-01-01

    Recent RNA sequencing studies have revealed that most of the human genome is transcribed, but very little of the total transcriptome has the ability to encode proteins. Long non-coding RNAs (lncRNAs) are non-coding transcripts longer than 200 nucleotides. Members of the non-coding genome include microRNAs (miRNAs), small regulatory RNAs and other short RNAs. Most long non-coding RNAs (lncRNAs) are poorly annotated. Recent recognition of lncRNAs highlights their effects in many biological ...

  19. Green's function method and its application to verification of diffusion models of GASFLOW code

    International Nuclear Information System (INIS)

    Xu, Z.; Travis, J.R.; Breitung, W.

    2007-07-01

    To validate the diffusion model and the aerosol particle model of the GASFLOW computer code, theoretical solutions of advection-diffusion problems are developed using the Green's function method. The work consists of a theory part and an application part. In the first part, the Green's functions of one-dimensional advection-diffusion problems are solved in infinite, semi-infinite and finite domains with Dirichlet, Neumann and/or Robin boundary conditions. Novel and effective image systems, tailored to the advection-diffusion problems, are constructed to find the Green's functions in a semi-infinite domain. The eigenfunction method is utilized to find the Green's functions in a bounded domain; in that case, key steps of a coordinate transform based on the concept of a reversed time scale, a Laplace transform and an exponential transform are proposed to solve for the Green's functions. Then the product rule for multi-dimensional Green's functions is discussed in a Cartesian coordinate system. Based on the building blocks of one-dimensional Green's functions, the multi-dimensional Green's function solution can be constructed by applying the product rule. Green's function tables are summarized to facilitate the application of the Green's functions. In the second part, the obtained Green's function solutions are used to benchmark a series of validations of the diffusion model of gas species in the continuous phase and the diffusion model of discrete aerosol particles in the GASFLOW code. Perfect agreement is obtained between the GASFLOW simulations and the Green's function solutions in the case of gas diffusion. Very good consistency is found between the theoretical solutions of the advection-diffusion equations and the numerical particle distributions in advective flows, when the drag force between the micron-sized particles and the conveying gas flow obeys Stokes' law of resistance. This situation corresponds to a very small Reynolds number based on the particle
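
    For reference, the free-space (infinite-domain) Green's function of the one-dimensional advection-diffusion equation with constant velocity u and diffusivity D, and the Cartesian product rule referred to above, take the following textbook form (standard results, not quoted from the report):

    ```latex
    % 1-D advection-diffusion equation: \partial_t c + u\,\partial_x c = D\,\partial_{xx} c
    % Green's function for an instantaneous point source at x_0, released at t = 0:
    G(x,t \mid x_0) = \frac{1}{\sqrt{4\pi D t}}
                      \exp\!\left[-\frac{\left(x - x_0 - u t\right)^2}{4 D t}\right],
    \qquad t > 0 .
    % Product rule in an unbounded Cartesian domain with constant coefficients:
    G(\mathbf{r},t \mid \mathbf{r}_0) = G_x(x,t \mid x_0)\, G_y(y,t \mid y_0)\, G_z(z,t \mid z_0),
    % so the concentration follows by superposition over the initial condition:
    c(\mathbf{r},t) = \int G(\mathbf{r},t \mid \mathbf{r}_0)\, c_0(\mathbf{r}_0)\, \mathrm{d}^3 r_0 .
    ```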

  20. Ethical orientation, functional linguistics, and the codes of ethics of the Canadian Nurses Association and the Canadian Medical Association.

    Science.gov (United States)

    Hadjistavropoulos, Thomas; Malloy, David C; Douaud, Patrick; Smythe, William E

    2002-09-01

    The literature on codes of ethics suggests that grammatical and linguistic structures as well as the theoretical ethical orientation conveyed in codes of ethics have implications for the manner in which such codes are received by those bound by them. Certain grammatical and linguistic structures, for example, tend to have an authoritarian and disempowering impact while others can be empowering. The authors analyze and compare the codes of ethics of the Canadian Nurses Association (CNA) and the Canadian Medical Association (CMA) in terms of their ethical orientation and grammatical/linguistic structures. The results suggest that the two codes differ substantially along these two dimensions. The CNA code contains proportionally more statements that provide a rationale for ethical behaviour; the statements of the CMA code tend to be more dogmatic. Functional grammar analysis suggests that both codes convey a strong deontological tone that does not enhance the addressee's ability to engage in discretionary decision-making. The nurses' code nonetheless implies a collaborative relationship with the client, whereas the medical code implies that the patient is the recipient of medical wisdom. The implications of these findings are discussed.

  1. Performance and Complexity of Tunable Sparse Network Coding with Gradual Growing Tuning Functions over Wireless Networks

    DEFF Research Database (Denmark)

    Garrido, Pablo; Sørensen, Chres Wiant; Roetter, Daniel Enrique Lucani

    2016-01-01

    Random Linear Network Coding (RLNC) has been shown to be a technique with several benefits, in particular when applied over wireless mesh networks, since it provides robustness against packet losses. On the other hand, Tunable Sparse Network Coding (TSNC) is a promising concept, which leverages...... a trade-off between computational complexity and goodput. An optimal density tuning function has not been found yet, due to the lack of a closed-form expression that links density, performance and computational cost. In addition, it would be difficult to implement, due to the feedback delay. In this work...

  2. Coded aperture imaging: the modulation transfer function for uniformly redundant arrays

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1980-01-01

    Coded aperture imaging uses many pinholes to increase the SNR for intrinsically weak sources when the radiation can be neither reflected nor refracted. Effectively, the signal is multiplexed onto an image and then decoded, often by a computer, to form a reconstructed image. We derive the modulation transfer function (MTF) of such a system employing uniformly redundant arrays (URA). We show that the MTF of a URA system is virtually the same as the MTF of an individual pinhole regardless of the shape or size of the pinhole. Thus, only the location of the pinholes is important for optimum multiplexing and decoding. The shape and size of the pinholes can then be selected based on other criteria. For example, one can generate self-supporting patterns, useful for energies typically encountered in the imaging of laser-driven compressions or in soft x-ray astronomy. Such patterns contain holes that are all the same size, easing the etching or plating fabrication efforts for the apertures. A new reconstruction method is introduced called delta decoding. It improves the resolution capabilities of a coded aperture system by mitigating a blur often introduced during the reconstruction step
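
    The multiplex-then-decode step underlying the MTF analysis can be illustrated with a toy correlation reconstruction; the snippet below uses a random binary mask as a stand-in for a true URA (a real URA is built deterministically, e.g. from quadratic residues, so that its periodic autocorrelation is flat and the system MTF matches that of a single pinhole).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a URA: a random, roughly 50%-open binary aperture.
    n = 31
    mask = (rng.random((n, n)) < 0.5).astype(float)

    # Balanced decoding array: +1 where the aperture is open, -1 where it is closed.
    decoder = 2.0 * mask - 1.0

    # For a single point source the recorded image is a (cyclic) copy of the mask;
    # here the source sits at zero shift, so the recording is the mask itself.
    recorded = mask.copy()

    # Reconstruction by periodic cross-correlation, evaluated with FFTs.
    recon = np.real(np.fft.ifft2(np.fft.fft2(recorded) * np.conj(np.fft.fft2(decoder))))

    # A sharp peak appears at the source position; with a random mask the background
    # fluctuates, whereas a true URA would give a flat background.
    print(recon.max(), recon.mean())
    ```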

  3. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  4. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2004-01-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures

  5. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.
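
    As a concrete anchor for the coding half only (the ranking-score terms and the joint optimization of the paper are omitted), a sparse code for a single sample under a fixed dictionary can be computed with a standard ISTA iteration for the l1-regularized reconstruction objective:

    ```python
    import numpy as np

    def sparse_code_ista(x, D, lam=0.1, n_iter=200):
        """Compute a sparse code z for sample x under dictionary D with ISTA,
        minimizing 0.5*||x - D z||^2 + lam*||z||_1 (coding step only; the joint
        coding/ranking objective discussed above is not implemented here)."""
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)
            u = z - grad / L                                         # gradient step
            z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)    # soft threshold
        return z

    # Example: random unit-norm dictionary of 50 atoms for 20-dimensional data.
    rng = np.random.default_rng(1)
    D = rng.standard_normal((20, 50))
    D /= np.linalg.norm(D, axis=0)
    x = 2.0 * D[:, 3] + 0.01 * rng.standard_normal(20)
    z = sparse_code_ista(x, D)                   # mostly zeros, large weight on atom 3
    ```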

  6. TRIPOLI-4: Monte Carlo transport code functionalities and applications

    International Nuclear Information System (INIS)

    Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.

    2003-01-01

    Tripoli-4 is a three-dimensional calculation code using the Monte Carlo method to simulate the transport of neutrons, photons, electrons and positrons. This code is used in four application fields: protection studies, criticality studies, core studies and instrumentation studies. Geometry, cross sections, description of sources, principle. (N.C.)

  7. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural memory performance. These data provided strong support for the dual-coding model.

  8. Development of SCINFUL-CG code to calculate response functions of scintillators in various shapes used for neutron measurement

    Energy Technology Data Exchange (ETDEWEB)

    Endo, Akira; Kim, Eunjoo; Yamaguchi, Yasuhiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-10-01

    A Monte Carlo code SCINFUL has been utilized for calculating response functions of organic scintillators for high-energy neutron spectroscopy. However, the applicability of SCINFUL is limited to the calculations for cylindrical NE213 and NE110 scintillators. In the present study, SCINFUL-CG was developed by introducing a geometry specifying function and high-energy neutron cross section data into SCINFUL. The geometry package MARS-CG, the extended version of the CG (Combinatorial Geometry), was programmed into SCINFUL-CG to express various geometries of detectors. Neutron spectra in the regions specified by the CG can be evaluated by the track length estimator. The cross section data of silicon, oxygen and aluminum for neutron transport calculation were incorporated up to 100 MeV using the data of LA150 library. Validity of SCINFUL-CG was examined by comparing calculated results with those by SCINFUL and MCNP and experimental data measured using high-energy neutron fields. SCINFUL-CG can be used for the calculations of the response functions and neutron spectra in the organic scintillators in various shapes. The computer code will be applicable to the designs of high-energy neutron spectrometers and neutron monitors using the organic scintillators. The present report describes the new features of SCINFUL-CG and explains how to use the code. (author)

  9. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN......, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  10. A Radiation Chemistry Code Based on the Greens Functions of the Diffusion Equation

    Science.gov (United States)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Ionizing radiation produces several radiolytic species such as .OH, e-aq, and H. when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on the DNA damage by ionizing radiation remain, notably on the indirect effect, i.e. the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code that is based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions with time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form. Therefore, the radial Green's function, which is much simpler, is often used. Hence, much effort has been devoted to the sampling of the radial Green's functions, for which we have developed a sampling algorithm. This algorithm only yields the inter-particle distance vector length after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help the understanding of the contribution of the indirect effect in the
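
    For context, the free-diffusion part of such a step is straightforward to sample: the Green's function of free diffusion is a Gaussian with variance 2*D*dt per Cartesian axis, so a Brownian step and the resulting inter-particle distance and deviation angle can be generated as below (illustrative only; the partially diffusion-controlled reaction kernels discussed above are not treated, and the diffusion coefficients are assumed typical literature values).

    ```python
    import numpy as np

    def free_diffusion_step(pos, D, dt, rng):
        """Propagate positions over dt with the free-diffusion Green's function,
        i.e. Gaussian displacements of variance 2*D*dt per axis (no reaction or
        partially diffusion-controlled boundary condition is applied)."""
        return pos + rng.normal(scale=np.sqrt(2.0 * D * dt), size=pos.shape)

    rng = np.random.default_rng(42)
    D = np.array([[2.8e-9],               # .OH diffusion coefficient, m^2/s (typical value)
                  [4.9e-9]])              # e-aq diffusion coefficient, m^2/s (typical value)
    dt = 1.0e-12                          # 1 ps time step
    pos = np.array([[0.0, 0.0, 0.0],      # .OH
                    [1.0e-9, 0.0, 0.0]])  # e-aq, initially 1 nm away

    r0 = pos[1] - pos[0]
    pos = free_diffusion_step(pos, D, dt, rng)
    r1 = pos[1] - pos[0]

    # Inter-particle distance and deviation angle of the separation vector after the step.
    dist = np.linalg.norm(r1)
    cos_theta = np.dot(r0, r1) / (np.linalg.norm(r0) * dist)
    print(dist, np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))
    ```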

  11. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  12. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  13. Computational Approaches Reveal New Insights into Regulation and Function of Non-coding RNAs and their Targets

    KAUST Repository

    Alam, Tanvir

    2016-11-28

    Regulation and function of protein-coding genes are increasingly well-understood, but no comparable evidence exists for non-coding RNA (ncRNA) genes, which appear to be more numerous than protein-coding genes. We developed a novel machine-learning model to distinguish promoters of long ncRNA (lncRNA) genes from those of protein-coding genes. This represents the first attempt to make this distinction based on properties of the associated gene promoters. From our analyses, several transcription factors (TFs), which are known to be regulated by lncRNAs, also emerged as potential global regulators of lncRNAs, suggesting that lncRNAs and TFs may participate in a bidirectional feedback regulatory network. Our results also raise the possibility that, due to the historical dependence on protein-coding genes in defining the chromatin states of active promoters, an adjustment of these chromatin signature profiles to incorporate lncRNAs is warranted in the future. Secondly, we developed a novel method to infer functions for lncRNA and microRNA (miRNA) transcripts based on their transcriptional regulatory networks in 119 tissues and 177 primary cells of human. This method for the first time combines information on the cell/tissue-specific expression of a transcript and the TFs and transcription co-factors (TcoFs) that control activation of that transcript. Transcripts were annotated using statistically enriched GO terms, pathways and diseases across cells/tissues, and an associated knowledgebase (FARNA) was developed. FARNA, having the most comprehensive function annotation of the considered ncRNAs across the widest spectrum of cells/tissues, has the potential to contribute to our understanding of ncRNA roles and their regulatory mechanisms in human. Thirdly, we developed a novel machine-learning model to identify the LD motif (a protein interaction motif) of paxillin, an ncRNA target that is involved in cell motility and cancer metastasis. Our recognition model identified new proteins not

  14. Code OK3 - An upgraded version of OK2 with beam wobbling function

    Science.gov (United States)

    Ogoyski, A. I.; Kawata, S.; Popov, P. H.

    2010-07-01

    structure, including the beam wobbling function. Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important of which is the beam wobbling function. Summary of revisions: In the code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially in the case of a very complicated mesh structure with big internal holes. The target temperature rises during the time of energy deposition. Some procedures are improved to perform faster. Energy conservation is checked at each step of the calculation process and corrected if necessary. New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in the chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process. Modified procedures in OK3: Procedure InitBeam( ) initializes the beam radius and coefficients A1, A2, A3, A4 and A5 for Gauss-distributed beams [2]; it is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow the beamlet number to vary from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster. Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy

  15. Parents' Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life

    DEFF Research Database (Denmark)

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    : Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers......AIM: To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring and to assess the validity and reliability of the data sets obtained. METHOD...... of 1.01 and 1.00. The mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 codes scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after...

  16. CITOPP, CITMOD, CITWI, Processing codes for CITATION Code

    International Nuclear Information System (INIS)

    Albarhoum, M.

    2008-01-01

    Description of program or function: CITOPP processes the output file of the CITATION 3-D diffusion code. The program can plot axial, radial and circumferential flux distributions (in cylindrical geometry) in addition to the multiplication factor convergence. The flux distributions can be drawn for each group specified by the program and visualized on the screen. CITMOD processes both the output and the input files of the CITATION 3-D diffusion code. CITMOD can visualize both the axial and the radial-angular models of the reactor described by the CITATION input/output files. CITWI processes the input file (CIT.INP) of the CITATION 3-D diffusion code. CIT.INP is processed to deduce the dimensions of the cell whose cross sections can represent the reactor component of the same name in section 008 of CIT.INP

  17. Code Samples Used for Complexity and Control

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  18. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed.

  19. Explicit MDS Codes with Complementary Duals

    DEFF Research Database (Denmark)

    Beelen, Peter; Jin, Lingfei

    2018-01-01

    In 1964, Massey introduced a class of codes with complementary duals which are called Linear Complementary Dual (LCD for short) codes. He showed that LCD codes have applications in communication systems, side-channel attacks (SCA) and so on. LCD codes have been extensively studied in the literature. On the other hand, MDS codes form an optimal family of classical codes which have wide applications in both theory and practice. The main purpose of this paper is to give an explicit construction of several classes of LCD MDS codes, using tools from algebraic function fields. We exemplify this construction ...

  20. Decoding the function of nuclear long non-coding RNAs.

    Science.gov (United States)

    Chen, Ling-Ling; Carmichael, Gordon G

    2010-06-01

    Long non-coding RNAs (lncRNAs) are mRNA-like, non-protein-coding RNAs that are pervasively transcribed throughout eukaryotic genomes. Rather than silently accumulating in the nucleus, many of these are now known or suspected to play important roles in nuclear architecture or in the regulation of gene expression. In this review, we highlight some recent progress in how lncRNAs regulate these important nuclear processes at the molecular level. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. Coded Modulation in C and MATLAB

    Science.gov (United States)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.

  2. User manual of UNF code

    International Nuclear Information System (INIS)

    Zhang Jingshang

    2001-01-01

    The UNF code (2001 version), written in FORTRAN-90, is developed for calculating fast neutron reaction data of structural materials with incident energies from about 1 keV up to 20 MeV. The code consists of the spherical optical model and the unified Hauser-Feshbach and exciton model. The manual of the UNF code is available for users. The format of the input parameter files and the output files, as well as the functions of the flags used in the UNF code, are introduced in detail, and examples of the format of the input parameter files are given

  3. The metaethics of nursing codes of ethics and conduct.

    Science.gov (United States)

    Snelling, Paul C

    2016-10-01

    Nursing codes of ethics and conduct are features of professional practice across the world, and in the UK, the regulator has recently consulted on and published a new code. Initially part of a professionalising agenda, nursing codes have recently come to represent a managerialist and disciplinary agenda and nursing can no longer be regarded as a self-regulating profession. This paper argues that codes of ethics and codes of conduct are significantly different in form and function, similar to the difference between ethics and law in everyday life. Some codes successfully integrate these two functions within the same document, while others, principally the UK Code, conflate them, resulting in an ambiguous document unable to fulfil its functions effectively. The paper analyses the differences between ethical-codes and conduct-codes by discussing titles, authorship, level, scope for disagreement, consequences of transgression, language and finally and possibly most importantly agent-centeredness. It is argued that conduct-codes cannot require nurses to be compassionate because compassion involves an emotional response. The concept of kindness provides a plausible alternative for conduct-codes as it is possible to understand it solely in terms of acts. But if kindness is required in conduct-codes, investigation and possible censure follows from its absence. Using examples it is argued that there are at least five possible accounts of the absence of kindness. As well as being potentially problematic for disciplinary panels, difficulty in understanding the features of blameworthy absence of kindness may challenge UK nurses who, following a recently introduced revalidation procedure, are required to reflect on their practice in relation to The Code. It is concluded that closer attention to metaethical concerns by code writers will better support the functions of their issuing organisations. © 2016 John Wiley & Sons Ltd.

  4. Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations

    International Nuclear Information System (INIS)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I.

    2015-01-01

    A burn-up calculation of VVER cores by a Monte-Carlo code is a complex process and requires large computational costs. This fact makes the use of Monte-Carlo codes complicated for project and operating calculations. Previously prepared isotopic compositions are proposed for use in Monte-Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method. The approximation method is based on the use of a spectral functionality and reference isotopic compositions that are calculated by engineering codes (TVS-M, PERMAK-A). The multiplication factors and power distributions of FA and VVER with infinite height are calculated in this work by the Monte-Carlo code MCU using the previously prepared isotopic compositions. The MCU calculation data were compared with the data obtained by the engineering codes.
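
    The kind of approximation described, preparing burnt-fuel isotopic compositions for a Monte-Carlo calculation from reference states produced by an engineering code, might look roughly like the following sketch (the burnup grid, nuclide densities and linear interpolation are illustrative assumptions only, not the actual TVS-M/PERMAK-A data or method):

        import numpy as np

        # Hypothetical reference compositions tabulated by an engineering code
        reference_burnups = np.array([0.0, 10.0, 20.0, 30.0])          # MWd/kgU
        reference_compositions = {
            "U235":  np.array([7.2e-4, 5.6e-4, 4.3e-4, 3.3e-4]),       # atoms/(barn*cm)
            "Pu239": np.array([0.0,    4.0e-5, 6.5e-5, 8.0e-5]),
        }

        def interpolate_composition(burnup):
            """Approximate the isotopic composition at an arbitrary burnup point
            from the reference states, ready to be written into a Monte-Carlo input."""
            return {nuc: float(np.interp(burnup, reference_burnups, dens))
                    for nuc, dens in reference_compositions.items()}

        print(interpolate_composition(17.5))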

  5. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. The temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm. The functions of the algorithm's FORTRAN subroutines and variables are outlined

  6. Functional and crystallographic characterization of Salmonella typhimurium Cu,Zn superoxide dismutase coded by the sodCI virulence gene

    NARCIS (Netherlands)

    Pesce, A; Battistoni, A; Stroppolo, ME; Polizio, F; Nardini, M; Kroll, JS; Langford, PR; O'Neill, P; Sette, M; Desideri, A; Bolognesi, M

    2000-01-01

    The functional and three-dimensional structural features of the Cu,Zn superoxide dismutase coded by the Salmonella typhimurium sodCI gene have been characterized. Measurements of the catalytic rate indicate that this enzyme is the most efficient superoxide dismutase analyzed so far, a feature that may

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes

  9. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation is described, based on VEditor, a free-license program; the work presented in this paper thus supplements and extends this free-license tool. The introduction briefly characterizes the tools available on the market which aid the design of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the written plug-in are presented, such as the programming extension concept and the results of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  10. APPLE-3: improvement of APPLE for neutron and gamma-ray flux, spectrum and reaction rate plotting code, and of its code manual

    International Nuclear Information System (INIS)

    Kawasaki, Hiromitu; Maki, Koichi; Seki, Yasushi.

    1991-03-01

    The code APPLE was produced in 1976 for calculating and plotting the tritium breeding ratio and tritium production rate distributions. In 1982 that code was improved as 'APPLE-2', to calculate and plot not only the tritium breeding ratio but also distributions of neutron and gamma-ray fluxes, their spectra, nuclear heating rates and other reaction rates, and dose rate distributions during operation and after shutdown. The code APPLE-2 can calculate and plot these nuclear properties derived from neutron and gamma-ray fluxes calculated by ANISN (a one-dimensional transport code), DOT3.5 (a two-dimensional transport code) and MORSE (a three-dimensional Monte Carlo code). We revised the code APPLE-2 as 'APPLE-3' by adding many functions to the APPLE-2 code in accordance with users' requirements proposed in recent progress of fusion reactor nuclear design. In the course of minor modifications to APPLE-2, a number of inconsistencies were found between the code manual and the input data in the code. In the present report, the new functions added to APPLE-2 and the improved users' manual are explained. (author)

  11. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  12. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corp., Tokyo (Japan)

    2012-07-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  13. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2012-07-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  14. Electromagnetic reprogrammable coding-metasurface holograms.

    Science.gov (United States)

    Li, Lianlin; Jun Cui, Tie; Ji, Wei; Liu, Shuo; Ding, Jun; Wan, Xiang; Bo Li, Yun; Jiang, Menghua; Qiu, Cheng-Wei; Zhang, Shuang

    2017-08-04

    Metasurfaces have enabled a plethora of emerging functions within an ultrathin dimension, paving the way towards flat and highly integrated photonic devices. Despite the rapid progress in this area, simultaneous realization of reconfigurability, high efficiency, and full control over the phase and amplitude of scattered light is posing a great challenge. Here, we try to tackle this challenge by introducing the concept of a reprogrammable hologram based on 1-bit coding metasurfaces. The state of each unit cell of the coding metasurface can be switched between '1' and '0' by electrically controlling the loaded diodes. Our proof-of-concept experiments show that multiple desired holographic images can be realized in real time with only a single coding metasurface. The proposed reprogrammable hologram may be a key in enabling future intelligent devices with reconfigurable and programmable functionalities that may lead to advances in a variety of applications such as microscopy, display, security, data storage, and information processing. Realizing metasurfaces with reconfigurability, high efficiency, and control over phase and amplitude is a challenge. Here, Li et al. introduce a reprogrammable hologram based on a 1-bit coding metasurface, where the state of each unit cell of the coding metasurface can be switched electrically.
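
    The far-field behaviour of a 1-bit coding pattern can be explored with a very rough scalar sketch (the 40x40 aperture, the random bit pattern and the Fraunhofer/FFT model are assumptions for illustration and do not reproduce the authors' design procedure):

        import numpy as np

        # Hypothetical 40x40 1-bit coding pattern: '1' cells add a pi phase shift, '0' cells none
        rng = np.random.default_rng(0)
        bits = rng.integers(0, 2, size=(40, 40))
        aperture = np.exp(1j * np.pi * bits)          # unit amplitude, binary phase

        # Scalar far-field (Fraunhofer) estimate of the scattered pattern via a zero-padded FFT
        far_field = np.fft.fftshift(np.fft.fft2(aperture, s=(256, 256)))
        intensity = np.abs(far_field) ** 2
        print(intensity.max() / intensity.mean())     # rough directivity-like figure of merit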

  15. Input/output manual of light water reactor fuel analysis code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corporation, Tokyo (Japan)

    2013-10-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed, as an extended version from the former version FEMAXI-6, for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which are fully disclosed in the code model description published in the form of another JAEA-Data/Code report. The present manual, which is the very counterpart of this description document, gives detailed explanations of files and operation method of FEMAXI-7 code and its related codes, methods of input/output, sample Input/Output, methods of source code modification, subroutine structure, and internal variables in a specific manner in order to facilitate users to perform fuel analysis by FEMAXI-7. (author)

  16. Input/output manual of light water reactor fuel analysis code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2013-10-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed, as an extended version from the former version FEMAXI-6, for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which are fully disclosed in the code model description published in the form of another JAEA-Data/Code report. The present manual, which is the very counterpart of this description document, gives detailed explanations of files and operation method of FEMAXI-7 code and its related codes, methods of input/output, sample Input/Output, methods of source code modification, subroutine structure, and internal variables in a specific manner in order to facilitate users to perform fuel analysis by FEMAXI-7. (author)

  17. Classification and modelling of functional outputs of computation codes. Application to accidental thermal-hydraulic calculations in pressurized water reactor (PWR)

    International Nuclear Information System (INIS)

    Auder, Benjamin

    2011-01-01

    This research thesis has been carried out within the frame of a project on nuclear reactor vessel lifetime. It deals with the use of numerical codes for which probability densities are estimated for every input parameter in order to calculate probability margins at the output level. More precisely, it deals with codes with one-dimensional functional responses. The author studies the numerical simulation of a pressurized thermal shock on a nuclear reactor vessel, i.e. one of the possible accident types. The study of the vessel integrity relies on a thermal-hydraulic analysis and on a mechanical analysis. Algorithms are developed and proposed for each of them. Input-output data are classified using a clustering technique and a graph-based representation. A method for output dimension reduction is proposed, and a regression is applied between inputs and reduced representations. Applications are discussed in the case of modelling and sensitivity analysis for the CATHARE code (a code used at the CEA for thermal-hydraulic analysis)
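
    The general workflow described above, reducing the dimension of functional outputs and then regressing the reduced coordinates on the inputs, can be sketched as follows (the synthetic data, the PCA-via-SVD reduction and the plain least-squares regression are illustrative choices, not the algorithms developed in the thesis):

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(200, 5))                      # 200 code runs, 5 input parameters
        t = np.linspace(0.0, 1.0, 100)                      # time grid of the functional output
        Y = np.outer(X @ rng.normal(size=5), np.sin(2 * np.pi * t)) + 0.01 * rng.normal(size=(200, 100))

        # 1) Dimension reduction of the output curves by PCA (SVD of the centred output matrix)
        Y_mean = Y.mean(axis=0)
        U, S, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
        k = 3                                               # retained components (assumption)
        scores = U[:, :k] * S[:k]                           # reduced representation of each curve

        # 2) Regression from inputs to reduced coordinates (least squares with intercept)
        A = np.hstack([X, np.ones((X.shape[0], 1))])
        coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

        # 3) Reconstruct the predicted curve for a new input point
        x_new = np.append(rng.uniform(size=5), 1.0)
        y_pred = Y_mean + (x_new @ coef) @ Vt[:k]
        print(y_pred.shape)                                 # (100,) predicted functional output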

  18. Clean Code - Why you should care

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Writing code is communication, not solely with the computer that executes it, but also with other developers and with oneself. A developer spends a lot of his working time reading and understanding code that was written by other developers or by himself in the past. The readability of the code is an important factor in the time needed to find a bug or add new functionality, which in turn has a big impact on productivity. Code that is difficult to understand, hard to maintain and refactor, and offers many spots for bugs to hide is not considered to be "clean code". But what can be considered "clean code", and what are the advantages of a strict application of its guidelines? In this presentation we will take a look at some typical "code smells" and proposed guidelines to improve your coding skills and write cleaner code that is less bug prone and easier to maintain.
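
    As a small generic illustration of the kind of "code smell" such guidelines target (this example is not taken from the talk), compare a function with opaque names and magic numbers to an intention-revealing equivalent:

        # Before: unclear names, a magic number, hidden intent
        def f(l):
            r = []
            for x in l:
                if x[1] > 18 and x[2]:
                    r.append(x[0])
            return r

        # After: intention-revealing names and an explicit constant
        ADULT_AGE = 18

        def names_of_active_adults(users):
            """Return the names of users who are adults and whose account is active."""
            return [name for name, age, is_active in users
                    if age > ADULT_AGE and is_active]

        print(names_of_active_adults([("Ada", 36, True), ("Bob", 12, True)]))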

  19. Anodizing color coded anodized Ti6Al4V medical devices for increasing bone cell functions.

    Science.gov (United States)

    Ross, Alexandra P; Webster, Thomas J

    2013-01-01

    Current titanium-based implants are often anodized in sulfuric acid (H(2)SO(4)) for color coding purposes. However, a crucial parameter in selecting the material for an orthopedic implant is the degree to which it will integrate into the surrounding bone. Loosening at the bone-implant interface can cause catastrophic failure when motion occurs between the implant and the surrounding bone. Recently, a different anodization process using hydrofluoric acid has been shown to increase bone growth on commercially pure titanium and titanium alloys through the creation of nanotubes. The objective of this study was to compare, for the first time, the influence of anodizing a titanium alloy medical device in sulfuric acid for color coding purposes, as is done in the orthopedic implant industry, followed by anodizing the device in hydrofluoric acid to implement nanotubes. Specifically, Ti6Al4V model implant samples were anodized first with sulfuric acid to create color-coding features, and then with hydrofluoric acid to implement surface features to enhance osteoblast functions. The material surfaces were characterized by visual inspection, scanning electron microscopy, contact angle measurements, and energy dispersive spectroscopy. Human osteoblasts were seeded onto the samples for a series of time points and were measured for adhesion and proliferation. After 1 and 2 weeks, the levels of alkaline phosphatase activity and calcium deposition were measured to assess the long-term differentiation of osteoblasts into the calcium depositing cells. The results showed that anodizing in hydrofluoric acid after anodizing in sulfuric acid partially retains color coding and creates unique surface features to increase osteoblast adhesion, proliferation, alkaline phosphatase activity, and calcium deposition. In this manner, this study provides a viable method to anodize an already color coded, anodized titanium alloy to potentially increase bone growth for numerous implant applications.

  20. Resistive plate chamber neutron and gamma sensitivity measurement with a {sup 252}Cf source

    Energy Technology Data Exchange (ETDEWEB)

    Abbrescia, M.; Altieri, S.; Baratti, V.; Barnaba, O.; Belli, G.; Bruno, G.; Colaleo, A.; DeVecchi, C.; Guida, R. E-mail: roberto.guida@pv.infn.it; Iaselli, G.; Imbres, E.; Loddo, F.; Maggi, M.; Marangelli, B.; Musitelli, G.; Nardo, R.; Natali, S.; Nuzzo, S.; Pugliese, G.; Ranieri, A.; Ratti, S.; Riccardi, C.; Romano, F.; Torre, P.; Vicini, A.; Vitulo, P.; Volpe, F

    2003-06-21

    A bakelite double gap Resistive Plate Chamber (RPC), operating in avalanche mode, has been exposed to the radiation emitted from a {sup 252}Cf source to measure its neutron and gamma sensitivity. One of the two gaps underwent the traditional electrodes surface coating with linseed oil. RPC signals were triggered by fission events detected using BaF{sub 2} scintillators. A Monte Carlo code, inside the GEANT 3.21 framework with MICAP interface, has been used to identify the gamma and neutron contributions to the total number of collected RPC signals. A neutron sensitivity of (0.63{+-}0.02)x10{sup -3} (average energy 2 MeV) and a gamma sensitivity of (14.0{+-}0.5)x10{sup -3} (average energy 1.5 MeV) have been measured in double gap mode. Measurements done in single gap mode have shown that both neutron and gamma sensitivity are independent of the oiling treatment.

  1. TRIPOLI-4: Monte Carlo transport code functionalities and applications; TRIPOLI-4: code de transport Monte Carlo fonctionnalites et applications

    Energy Technology Data Exchange (ETDEWEB)

    Both, J P; Lee, Y K; Mazzolo, A; Peneliau, Y; Petit, O; Roesslinger, B [CEA Saclay, Dir. de l' Energie Nucleaire (DEN), Service d' Etudes de Reacteurs et de Modelisation Avancee, 91 - Gif sur Yvette (France)

    2003-07-01

    Tripoli-4 is a three-dimensional calculation code using the Monte Carlo method to simulate the transport of neutrons, photons, electrons and positrons. This code is used in four application fields: protection studies, criticality studies, core studies and instrumentation studies. Geometry, cross sections, description of sources, principle. (N.C.)

  2. Evaluation of three coding schemes designed for improved data communication

    Science.gov (United States)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.
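
    For readers unfamiliar with the decoding procedure mentioned above, a minimal hard-decision Viterbi decoder for a rate 1/2, constraint-length 3 convolutional code (generators 7 and 5 octal) is sketched below; the specific code and the Hamming branch metric are illustrative choices, not the codes evaluated in the report:

        import numpy as np

        G = [0b111, 0b101]          # generator polynomials (7, 5) octal, constraint length K = 3
        N_STATES = 1 << 2           # states are the last K-1 = 2 input bits

        def encode(bits):
            """Rate-1/2 convolutional encoder."""
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received, n_bits):
            """Hard-decision Viterbi decoding with Hamming branch metrics."""
            INF = float("inf")
            metric = [0.0] + [INF] * (N_STATES - 1)
            paths = [[] for _ in range(N_STATES)]
            for i in range(n_bits):
                r = received[2 * i: 2 * i + 2]
                new_metric = [INF] * N_STATES
                new_paths = [None] * N_STATES
                for state in range(N_STATES):
                    if metric[state] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << 2) | state
                        expected = [bin(reg & g).count("1") & 1 for g in G]
                        branch = sum(e != x for e, x in zip(expected, r))
                        nxt = reg >> 1
                        if metric[state] + branch < new_metric[nxt]:
                            new_metric[nxt] = metric[state] + branch
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            return paths[int(np.argmin(metric))]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        coded = encode(msg + [0, 0])           # two tail bits terminate the trellis
        coded[3] ^= 1                          # flip one channel bit
        assert viterbi_decode(coded, len(msg) + 2)[:len(msg)] == msg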

  3. Evaluation of ASME code flaw analysis procedure using the influence function method for application to PWR primary piping

    International Nuclear Information System (INIS)

    Hong, S.Y.; Yeater, M.L.

    1985-01-01

    This paper discusses stress intensity factor calculations and fatigue analysis for a PWR primary coolant piping system. The influence function method is applied to evaluate ASME Code Section XI Appendix A ''analysis of flaw indication'' for the application to a PWR primary piping. Results of the analysis are discussed in detail. (orig.)

  4. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
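
    The final step described, plotting total cost as a function of the two design choices and selecting an optimum, can be illustrated with a toy scan (the cost model, parameter ranges and units are entirely invented):

        import numpy as np

        def system_cost(rise_time_us, aspect_ratio):
            """Hypothetical total-cost model (arbitrary units): faster rise times need
            more expensive pulsed power, while extreme core aspect ratios waste ferrite."""
            pulsed_power = 50.0 / rise_time_us
            core_material = 10.0 * (aspect_ratio + 1.0 / aspect_ratio)
            return pulsed_power + core_material

        rise_times = np.linspace(0.2, 2.0, 50)        # microseconds (illustrative range)
        aspect_ratios = np.linspace(1.0, 4.0, 50)
        costs = np.array([[system_cost(t, a) for a in aspect_ratios] for t in rise_times])

        i, j = np.unravel_index(np.argmin(costs), costs.shape)
        print(f"minimum cost {costs[i, j]:.1f} at rise time {rise_times[i]:.2f} us, "
              f"aspect ratio {aspect_ratios[j]:.2f}")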

  5. Development of the versatile reactor analysis code system, MARBLE2

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Jin, Tomoyuki; Hazama, Taira; Hirai, Yasushi

    2015-07-01

    The second version of the versatile reactor analysis code system, MARBLE2, has been developed. A lot of new functions have been added in MARBLE2 by using the base technology developed in the first version (MARBLE1). Introducing the remaining functions of the conventional code system (JOINT-FR and SAGEP-FR), MARBLE2 enables one to execute almost all analysis functions of the conventional code system with the unified user interfaces of its subsystem, SCHEME. In particular, the sensitivity analysis functionality is available in MARBLE2. On the other hand, new built-in solvers have been developed, and existing ones have been upgraded. Furthermore, some other analysis codes and libraries developed in JAEA have been consolidated and prepared in SCHEME. In addition, several analysis codes developed in the other institutes have been additionally introduced as plug-in solvers. Consequently, gamma-ray transport calculation and heating evaluation become available. As for another subsystem, ORPHEUS, various functionality updates and speed-up techniques have been applied based on user experience of MARBLE1 to enhance its usability. (author)

  6. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on the dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for the light water was unified by replacing the EOS of COBRA-TF by that of the RELAP5. This theory manual provides a complete list of overall information of code structure and major function of MARS including code architecture, hydrodynamic model, heat structure, trip / control system and point reactor kinetics model. Therefore, this report would be very useful for the code users. The overall structure of the manual is modeled on the structure of the RELAP5 and as such the layout of the manual is very similar to that of the RELAP. This similitude to RELAP5 input is intentional as this input scheme will allow minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  9. Burn-up function of fuel management code for aqueous homogeneous reactors and its validation

    International Nuclear Information System (INIS)

    Wang Liangzi; Yao Dong; Wang Kan

    2011-01-01

    The Fuel Management Code for Aqueous Homogeneous Reactors (FMCAHR) is developed based on the Monte Carlo transport method to analyze the physics characteristics of aqueous homogeneous reactors. FMCAHR has the ability to perform resonance treatment, search for critical rod heights, calculate thermal hydraulic parameters, calculate radiolytic-gas bubbles and perform burn-up calculations. This paper introduces the theory model and scheme of its burn-up function, and then compares its calculation results with benchmarks and with DRAGON's burn-up results, which confirms its burn-up computing precision and its applicability to burn-up calculation and analysis for aqueous solution reactors. (authors)
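
    A single burn-up step of the kind referred to above can be sketched as a depletion-chain solve (the two-nuclide removal-only chain, one-group cross sections and constant flux are illustrative assumptions, not FMCAHR's model):

        import numpy as np
        from scipy.linalg import expm

        # Hypothetical one-group depletion of two absorbers under a constant flux;
        # production terms (e.g. Pu-239 build-up from U-238 capture) are omitted for brevity.
        phi = 3.0e13                                     # neutron flux [n/cm^2/s]
        sigma_a = np.array([680e-24, 1010e-24])          # absorption cross sections [cm^2]
        N0 = np.array([7.0e20, 5.0e19])                  # initial atom densities [atoms/cm^3]

        A = np.diag(-sigma_a * phi)                      # dN/dt = A N (removal only)
        dt = 30 * 24 * 3600.0                            # one 30-day burn-up step [s]
        N1 = expm(A * dt) @ N0                           # matrix-exponential depletion step
        print(N1 / N0)                                   # surviving fraction of each nuclide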

  10. GRAYSKY-A new gamma-ray skyshine code

    International Nuclear Information System (INIS)

    Witts, D.J.; Twardowski, T.; Watmough, M.H.

    1993-01-01

    This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors
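
    The point-kernel-with-buildup idea can be illustrated with a bare-bones estimate (the linear buildup form, attenuation coefficient, source strength and flux-to-dose factor are placeholders, not GRAYSKY data or its skyshine response function):

        import numpy as np

        def point_kernel_dose_rate(source_gamma_per_s, mu_per_cm, distance_cm,
                                   flux_to_dose=5.8e-7):
            """Uncollided point-kernel flux times a simple buildup factor.
            flux_to_dose is a placeholder conversion [(uSv/h) per (gamma/cm^2/s)]."""
            mfp = mu_per_cm * distance_cm                     # path length in mean free paths
            buildup = 1.0 + mfp                               # crude linear buildup model (assumption)
            flux = source_gamma_per_s * np.exp(-mfp) / (4.0 * np.pi * distance_cm**2)
            return buildup * flux * flux_to_dose

        print(point_kernel_dose_rate(source_gamma_per_s=1e10, mu_per_cm=0.06, distance_cm=300.0))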

  11. MINET [momentum integral network] code documentation

    International Nuclear Information System (INIS)

    Van Tuyle, G.J.; Nepsee, T.C.; Guppy, J.G.

    1989-12-01

    The MINET computer code, developed for the transient analysis of fluid flow and heat transfer, is documented in this four-part reference. In Part 1, the MINET models, which are based on a momentum integral network method, are described. The various aspects of utilizing the MINET code are discussed in Part 2, The User's Manual. The third part is a code description, detailing the basic code structure and the various subroutines and functions that make up MINET. In Part 4, example input decks, as well as recent validation studies and applications of MINET are summarized. 32 refs., 36 figs., 47 tabs

  12. Coupling External Radiation Transport Code Results to the GADRAS Detector Response Function

    International Nuclear Information System (INIS)

    Mitchell, Dean J.; Thoreson, Gregory G.; Horne, Steven M.

    2014-01-01

    Simulating gamma spectra is useful for analyzing special nuclear materials. Gamma spectra are influenced not only by the source and the detector, but also by the external, and potentially complex, scattering environment. The scattering environment can make accurate representations of gamma spectra difficult to obtain. By coupling the Monte Carlo N-Particle (MCNP) code with the Gamma Detector Response and Analysis Software (GADRAS) detector response function, gamma spectrum simulations can be computed with a high degree of fidelity even in the presence of a complex scattering environment. Traditionally, GADRAS represents the external scattering environment with empirically derived scattering parameters. By modeling the external scattering environment in MCNP and using the results as input for the GADRAS detector response function, gamma spectra can be obtained with a high degree of fidelity. This method was verified with experimental data obtained in an environment with a significant amount of scattering material. The experiment used both gamma-emitting sources and moderated and bare neutron-emitting sources. The sources were modeled using GADRAS and MCNP in the presence of the external scattering environment, producing accurate representations of the experimental data.
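
    At its core, the coupling amounts to folding a transport-calculated incident spectrum through a detector response matrix; a toy version is sketched below (the Gaussian-broadening-only response and the synthetic spectrum are assumptions, not the GADRAS detector response function):

        import numpy as np

        n = 128
        energies = np.linspace(50.0, 3000.0, n)                 # keV bin centres

        # Toy incident flux from an external transport calculation: one line plus a continuum
        incident = 0.2 * np.exp(-energies / 500.0)
        incident[np.argmin(np.abs(energies - 662.0))] += 10.0   # hypothetical 662 keV line

        # Toy response matrix: Gaussian photopeak broadening only (no escape/Compton features)
        fwhm = 0.07 * np.sqrt(energies * 661.7)                 # ad-hoc resolution model
        sigma = fwhm / 2.355
        R = np.exp(-0.5 * ((energies[:, None] - energies[None, :]) / sigma[None, :])**2)
        R /= R.sum(axis=0, keepdims=True)                       # each column deposits all of its counts

        measured = R @ incident                                 # folded (simulated) pulse-height spectrum
        print(measured.sum(), incident.sum())                   # totals preserved by construction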

  13. Noncoherent Spectral Optical CDMA System Using 1D Active Weight Two-Code Keying Codes

    Directory of Open Access Journals (Sweden)

    Bih-Chyun Yeh

    2016-01-01

    We propose a new family of one-dimensional (1D) active weight two-code keying (TCK) codes in spectral amplitude coding (SAC) optical code division multiple access (OCDMA) networks. We use encoding and decoding transfer functions to operate the 1D active weight TCK. The proposed structure includes an optical line terminal (OLT) and optical network units (ONUs) to produce the encoding and decoding codes of the proposed OLT and ONUs, respectively. The proposed ONU uses the modified cross-correlation to remove interference from other simultaneous users, that is, the multiuser interference (MUI). When the phase-induced intensity noise (PIIN) is the dominant noise, the modified cross-correlation suppresses the PIIN. In the numerical results, we find that the bit error rate (BER) for the proposed system using the 1D active weight TCK codes outperforms that for two other systems using the 1D M-Seq codes and 1D balanced incomplete block design (BIBD) codes. The effective source power for the proposed system can achieve −10 dBm, which is lower than that required for the other systems.

  14. Implantation of MC2 computer code

    International Nuclear Information System (INIS)

    Seehusen, J.; Nair, R.P.K.; Becceneri, J.C.

    1981-01-01

    The implantation of the MC2 computer code in the CDC system is presented. The MC2 computer code calculates multigroup cross sections for typical compositions of fast reactors. The multigroup constants are calculated using solutions of the P1 or B1 approximations for a given buckling value as the weighting function. (M.C.K.) [pt

  15. TRACMAB. A computer code to form part of the link between the codes TRAC and MABEL

    International Nuclear Information System (INIS)

    Newbon, S.

    1982-05-01

    This report describes the function of the link program TRACMAB and provides a guide for users. The program is required to convert the thermal disequilibrium data output by the transient code TRAC into equilibrium data in a format compatible with the input data required by the code CAIN which in turn produces input data for MABEL. (author)

  16. FRESCO: fusion reactor simulation code for tokamaks

    International Nuclear Information System (INIS)

    Mantsinen, M.J.

    1995-03-01

    For the study of the dynamics of tokamak fusion reactors, a zero-dimensional particle and power balance code, FRESCO (Fusion Reactor Simulation Code), has been developed at the Department of Technical Physics of Helsinki University of Technology. The FRESCO code is based on zero-dimensional particle and power balance equations averaged over prescribed plasma profiles. The report describes the data structure of the FRESCO code, including the description of the COMMON statements, program input, and program output. The general structure of the code is described, including the description of subprograms and functions. The physical model used and examples of the code performance are also included in the report. (121 tabs.) (author)
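
    The zero-dimensional power balance underlying such a code can be illustrated with a minimal integration of dW/dt = P_heat - W/tau_E (the heating power, confinement time and Euler time step are placeholders rather than FRESCO's models):

        import numpy as np

        def evolve_energy(p_heat_mw, tau_e_s, w0_mj=0.0, dt=0.01, t_end=10.0):
            """Integrate dW/dt = P_heat - W/tau_E with a simple explicit Euler step."""
            times = np.arange(0.0, t_end, dt)
            w = np.empty_like(times)
            w[0] = w0_mj
            for i in range(1, len(times)):
                w[i] = w[i - 1] + dt * (p_heat_mw - w[i - 1] / tau_e_s)
            return times, w

        t, w = evolve_energy(p_heat_mw=50.0, tau_e_s=1.2)
        print(w[-1], 50.0 * 1.2)     # the stored energy approaches the steady state P_heat * tau_E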

  17. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

    This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT, RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group averages, mixtures of cross-sections, etc.; and to produce graphical plots and data comparisons. The codes are designed to operate on virtually any type of computer including PC's. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)

  18. What Froze the Genetic Code?

    Directory of Open Access Journals (Sweden)

    Lluís Ribas de Pouplana

    2017-04-01

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation of why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  19. What Froze the Genetic Code?

    Science.gov (United States)

    Ribas de Pouplana, Lluís; Torres, Adrian Gabriel; Rafels-Ybern, Àlbert

    2017-04-05

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation of why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  20. Code REX to fit experimental data to exponential functions and graphics plotting; Codigo REX para ajuste de datos experimentales a funciones exponenciales y su representacion grafica

    Energy Technology Data Exchange (ETDEWEB)

    Romero, L.; Travesi, A.

    1983-07-01

    The REX code, written in Fortran IV, performs the fitting of a set of experimental data to different kinds of functions: a straight line (Y = A + BX) and various exponential types (Y = A·B^X, Y = A·X^B, Y = A·exp(BX)), using the least squares criterion. The fitting can be done directly for one selected function or for the four simultaneously, and the code allows one to choose the function that best fits the data, since it presents the fit statistics for all of them. Further, it plots the fitted function in the appropriate coordinate axis system. An additional option also allows graphical plotting of the experimental data used for the fitting. All the data necessary to execute this code are requested from the operator at the terminal screen in an interactive screen-operator dialogue, and the values are entered through the keyboard. This code can be executed on any computer provided with a graphics screen and keyboard terminal, with an X-Y plotter serially connected to the graphics terminal. (Author) 5 refs.
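
    One of the listed fits, Y = A·exp(BX), can be reproduced in a few lines by log-linearization and ordinary least squares (the synthetic data and the use of numpy.polyfit are illustrative; REX itself is a Fortran IV code):

        import numpy as np

        # Synthetic data roughly following Y = A * exp(B X) with a little noise
        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 5.0, 30)
        y = 2.5 * np.exp(0.8 * x) * (1.0 + 0.02 * rng.normal(size=x.size))

        # Linearize: ln Y = ln A + B X, then apply ordinary least squares
        B, lnA = np.polyfit(x, np.log(y), 1)
        A = np.exp(lnA)
        print(f"A = {A:.3f}, B = {B:.3f}")      # expect values close to 2.5 and 0.8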

  1. Computer Security: is your code sane?

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities*? In other words: are our codes sane?   Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pit-falls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. “Static Code Analysers” are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies including: employing undeclared variables; expressions resu...

  2. VOA: a 2-d plasma physics code

    International Nuclear Information System (INIS)

    Eltgroth, P.G.

    1975-12-01

    A 2-dimensional relativistic plasma physics code was written and tested. The non-thermal components of the particle distribution functions are represented by expansion into moments in momentum space. These moments are computed directly from numerical equations. Currently three species are included - electrons, ions and "beam electrons". The computer code runs on either the 7600 or STAR machines at LLL. Both the physics and the operation of the code are discussed

  3. Anodizing color coded anodized Ti6Al4V medical devices for increasing bone cell functions

    Directory of Open Access Journals (Sweden)

    Webster TJ

    2013-01-01

    Alexandra P Ross, Thomas J Webster. School of Engineering and Department of Orthopedics, Brown University, Providence, RI, USA. Abstract: Current titanium-based implants are often anodized in sulfuric acid (H2SO4) for color coding purposes. However, a crucial parameter in selecting the material for an orthopedic implant is the degree to which it will integrate into the surrounding bone. Loosening at the bone–implant interface can cause catastrophic failure when motion occurs between the implant and the surrounding bone. Recently, a different anodization process using hydrofluoric acid has been shown to increase bone growth on commercially pure titanium and titanium alloys through the creation of nanotubes. The objective of this study was to compare, for the first time, the influence of anodizing a titanium alloy medical device in sulfuric acid for color coding purposes, as is done in the orthopedic implant industry, followed by anodizing the device in hydrofluoric acid to implement nanotubes. Specifically, Ti6Al4V model implant samples were anodized first with sulfuric acid to create color-coding features, and then with hydrofluoric acid to implement surface features to enhance osteoblast functions. The material surfaces were characterized by visual inspection, scanning electron microscopy, contact angle measurements, and energy dispersive spectroscopy. Human osteoblasts were seeded onto the samples for a series of time points and were measured for adhesion and proliferation. After 1 and 2 weeks, the levels of alkaline phosphatase activity and calcium deposition were measured to assess the long-term differentiation of osteoblasts into the calcium depositing cells. The results showed that anodizing in hydrofluoric acid after anodizing in sulfuric acid partially retains color coding and creates unique surface features to increase osteoblast adhesion, proliferation, alkaline phosphatase activity, and calcium deposition. In this manner, this study

  4. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.

  5. Automatic coding of online collaboration protocols

    NARCIS (Netherlands)

    Erkens, Gijsbert; Janssen, J.J.H.M.

    2006-01-01

    An automatic coding procedure is described to determine the communicative functions of messages in chat discussions. Five main communicative functions are distinguished: argumentative (indicating a line of argumentation or reasoning), responsive (e.g., confirmations, denials, and answers),

  6. Striatal dopamine release codes uncertainty in pathological gambling

    DEFF Research Database (Denmark)

    Linnet, Jakob; Mouridsen, Kim; Peterson, Ericka

    2012-01-01

    Two mechanisms of midbrain and striatal dopaminergic projections may be involved in pathological gambling: hypersensitivity to reward and sustained activation toward uncertainty. The midbrain-striatal dopamine system distinctly codes reward and uncertainty, where dopaminergic activation is a linear function of expected reward and an inverse U-shaped function of uncertainty. In this study, we investigated the dopaminergic coding of reward and uncertainty in 18 pathological gambling sufferers and 16 healthy controls. We used positron emission tomography (PET) with the tracer [11C]raclopride to measure dopamine release, and we used performance on the Iowa Gambling Task (IGT) to determine overall reward and uncertainty. We hypothesized that we would find a linear function between dopamine release and IGT performance, if dopamine release coded reward in pathological gambling. If, on the other hand ...

  7. Striatal dopamine release codes uncertainty in pathological gambling

    DEFF Research Database (Denmark)

    Linnet, Jakob; Mouridsen, Kim; Peterson, Ericka

    2012-01-01

    Two mechanisms of midbrain and striatal dopaminergic projections may be involved in pathological gambling: hypersensitivity to reward and sustained activation toward uncertainty. The midbrain-striatal dopamine system distinctly codes reward and uncertainty, where dopaminergic activation is a linear function of expected reward and an inverse U-shaped function of uncertainty. In this study, we investigated the dopaminergic coding of reward and uncertainty in 18 pathological gambling sufferers and 16 healthy controls. We used positron emission tomography (PET) with the tracer [(11)C]raclopride to measure dopamine release, and we used performance on the Iowa Gambling Task (IGT) to determine overall reward and uncertainty. We hypothesized that we would find a linear function between dopamine release and IGT performance, if dopamine release coded reward in pathological gambling. If, on the other hand ...

  8. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods for an approximation of review expert’s performance evaluation function. Due to limitations in ...

  9. Improvement of group collapsing in TRANSX code

    International Nuclear Information System (INIS)

    Jeong, Hyun Tae; Kim, Young Cheol; Kim, Young In; Kim, Young Kyun

    1996-07-01

    A cross section generating and processing computer code TRANSX version 2.15 in the K-CORE system, being developed by the KAERI LMR core design technology development team, produces various cross section input files appropriate for flux calculation options from the cross section library MATXS. In this report, the group collapsing function of TRANSX has been improved to utilize the zone-averaged flux file RZFLUX, written in double precision, as the flux weighting function. As a result, an iterative calculation system using double precision RZFLUX, consisting of the cross section data library file MATXS, the effective cross section producing and processing code TRANSX, and the transport theory calculation code TWODANT, has been set up and verified through a sample model calculation. 4 refs. (Author)
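
    As a rough illustration of the flux-weighted group collapsing that such processing codes perform, the sketch below condenses fine-group cross sections to a coarse-group structure using a zone-averaged flux as the weighting function. The group structure and numerical values are invented for the example; the actual TRANSX/RZFLUX file handling is not modeled.

        import numpy as np

        def collapse(sigma_fine, flux_fine, coarse_bounds):
            """Flux-weighted condensation of fine-group cross sections.

            sigma_fine    : cross section per fine group (barns)
            flux_fine     : weighting flux per fine group (e.g., zone-averaged flux values)
            coarse_bounds : list of (start, stop) fine-group index ranges, one per coarse group
            """
            sigma_coarse = []
            for start, stop in coarse_bounds:
                phi = flux_fine[start:stop]
                sig = sigma_fine[start:stop]
                # sigma_G = sum_g sigma_g * phi_g / sum_g phi_g
                sigma_coarse.append(np.sum(sig * phi) / np.sum(phi))
            return np.array(sigma_coarse)

        # Hypothetical 6-group data condensed to 2 coarse groups.
        sigma = np.array([1.2, 1.5, 2.0, 4.0, 8.0, 20.0])
        flux = np.array([3.0, 2.5, 2.0, 1.0, 0.5, 0.1])
        print(collapse(sigma, flux, [(0, 3), (3, 6)]))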

  10. Circular codes revisited: a statistical approach.

    Science.gov (United States)

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and undergone rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes, with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists great variability among sequences. Second, we focus on this code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored so as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes to reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  12. Annotating non-coding regions of the genome.

    Science.gov (United States)

    Alexander, Roger P; Fang, Gang; Rozowsky, Joel; Snyder, Michael; Gerstein, Mark B

    2010-08-01

    Most of the human genome consists of non-protein-coding DNA. Recently, progress has been made in annotating these non-coding regions through the interpretation of functional genomics experiments and comparative sequence analysis. One can conceptualize functional genomics analysis as involving a sequence of steps: turning the output of an experiment into a 'signal' at each base pair of the genome; smoothing this signal and segmenting it into small blocks of initial annotation; and then clustering these small blocks into larger derived annotations and networks. Finally, one can relate functional genomics annotations to conserved units and measures of conservation derived from comparative sequence analysis.
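
    The pipeline sketched above (per-base signal, smoothing, segmentation into small blocks, clustering into larger annotations) can be written down generically. The window size, threshold and merging gap below are arbitrary illustrative choices, not values from the paper.

        import numpy as np

        def annotate(signal, window=5, threshold=1.0, max_gap=2):
            """Toy annotation pipeline: smooth a per-base signal, segment it into
            blocks above a threshold, then cluster nearby blocks into larger annotations."""
            # 1) Smooth with a simple moving average.
            kernel = np.ones(window) / window
            smooth = np.convolve(signal, kernel, mode="same")

            # 2) Segment: contiguous runs of bases whose smoothed signal exceeds the threshold.
            above = smooth > threshold
            blocks, start = [], None
            for i, flag in enumerate(above):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    blocks.append((start, i))
                    start = None
            if start is not None:
                blocks.append((start, len(signal)))

            # 3) Cluster: merge blocks separated by at most max_gap bases.
            merged = []
            for b in blocks:
                if merged and b[0] - merged[-1][1] <= max_gap:
                    merged[-1] = (merged[-1][0], b[1])
                else:
                    merged.append(b)
            return merged

        rng = np.random.default_rng(0)
        toy = rng.poisson(0.5, 200).astype(float)
        toy[50:70] += 3.0   # a hypothetical enriched region
        print(annotate(toy))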

  13. Code-modulated interferometric imaging system using phased arrays

    Science.gov (United States)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
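
    The code-modulation idea can be illustrated with a toy baseband model: two signals are modulated by orthogonal Walsh codes, combined into a single path, square-law detected, and their correlation (one visibility sample) is recovered by demultiplexing with the product code. The Walsh codes and synthetic signals below are illustrative assumptions, not the phased-array hardware of the paper.

        import numpy as np

        def walsh_codes(order=4):
            """Return the rows of a Hadamard matrix (±1 Walsh codes) of the given order."""
            h = np.array([[1]])
            while h.shape[0] < order:
                h = np.block([[h, h], [h, -h]])
            return h

        def code_modulated_correlation(s1, s2, c1, c2):
            """Recover the correlation <s1*s2> from a single combined, squared signal.

            Each sample of s1/s2 is held constant over one code period, modulated chip-by-chip
            by its code, summed with the other branch, squared (detector), and demultiplexed
            with the product code c1*c2.
            """
            c3 = c1 * c2                      # product of two orthogonal codes: a third code
            n_chips = len(c1)
            acc = 0.0
            for a, b in zip(s1, s2):
                combined = a * c1 + b * c2    # shared hardware path after code modulation
                detected = combined ** 2      # square-law detection
                acc += np.dot(detected, c3)   # demultiplex the cross term with c3
            # the cross term contributes 2*a*b per chip; normalize accordingly
            return acc / (2.0 * n_chips * len(s1))

        rng = np.random.default_rng(1)
        common = rng.normal(size=10000)                   # correlated "sky" component
        s1 = common + 0.3 * rng.normal(size=10000)
        s2 = common + 0.3 * rng.normal(size=10000)
        codes = walsh_codes(4)
        est = code_modulated_correlation(s1, s2, codes[1], codes[2])
        print(est, np.mean(s1 * s2))                      # estimate vs. direct correlation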

  14. Performance measures for transform data coding.

    Science.gov (United States)

    Pearl, J.; Andrews, H. C.; Pratt, W. K.

    1972-01-01

    This paper develops performance criteria for evaluating transform data coding schemes under computational constraints. Computational constraints that conform with the proposed basis-restricted model give rise to suboptimal coding efficiency characterized by a rate-distortion relation R(D) similar in form to the theoretical rate-distortion function. Numerical examples of this performance measure are presented for Fourier, Walsh, Haar, and Karhunen-Loeve transforms.

  15. Functional intersection of ATM and DNA-dependent protein kinase catalytic subunit in coding end joining during V(D)J recombination

    DEFF Research Database (Denmark)

    Lee, Baeck-Seung; Gapud, Eric J; Zhang, Shichuan

    2013-01-01

    V(D)J recombination is initiated by the RAG endonuclease, which introduces DNA double-strand breaks (DSBs) at the border between two recombining gene segments, generating two hairpin-sealed coding ends and two blunt signal ends. ATM and DNA-dependent protein kinase catalytic subunit (DNA-PKcs) are serine-threonine kinases that orchestrate the cellular responses to DNA DSBs. During V(D)J recombination, ATM and DNA-PKcs have unique functions in the repair of coding DNA ends. ATM deficiency leads to instability of postcleavage complexes and the loss of coding ends from these complexes. DNA...... when ATM is present and its kinase activity is intact. The ability of ATM to compensate for DNA-PKcs kinase activity depends on the integrity of three threonines in DNA-PKcs that are phosphorylation targets of ATM, suggesting that ATM can modulate DNA-PKcs activity through direct phosphorylation of DNA...

  16. Short-lived non-coding transcripts (SLiTs): Clues to regulatory long non-coding RNA.

    Science.gov (United States)

    Tani, Hidenori

    2017-03-22

    Whole transcriptome analyses have revealed a large number of novel long non-coding RNAs (lncRNAs). Although the importance of lncRNAs has been documented in previous reports, the biological and physiological functions of lncRNAs remain largely unknown. The role of lncRNAs seems an elusive problem. Here, I propose a clue to the identification of regulatory lncRNAs. The key point is RNA half-life. RNAs with a long half-life (t1/2 > 4 h) contain a significant proportion of ncRNAs, as well as mRNAs involved in housekeeping functions, whereas RNAs with a short half-life (t1/2 < 4 h) contain regulatory ncRNAs and regulatory mRNAs. This novel class of ncRNAs with a short half-life can be categorized as Short-Lived non-coding Transcripts (SLiTs). I consider that SLiTs are likely to be rich in functionally uncharacterized regulatory RNAs. This review describes recent progress in research into SLiTs.

  17. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds...

  18. Cell-assembly coding in several memory processes.

    Science.gov (United States)

    Sakurai, Y

    1998-01-01

    The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are overlaps of neurons among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.

  19. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time-averaged measure (e.g. a rate code), or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which permits one to distinguish what scales of a signal are dominated by a complex temporal organization or a randomly generated process. In terms of the neural code, this kind of analysis makes it possible to detect temporal scales at which a time-pattern coding scheme or alternatively a rate code is present. Additionally, by finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code can be established, and hence the word length of the code can be found. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna of a human patient with Parkinson's disease, and show that a time-pattern coding and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
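
    As an illustration of the analysis described above, the sketch below computes a temporal structure function of an interspike-interval series; the lag range over which it keeps growing reflects temporal structure, while saturation indicates rate-like, uncorrelated behaviour. The synthetic spike statistics are an assumption for the example, not the patient recordings of the paper.

        import numpy as np

        def structure_function(x, lags, q=2):
            """Temporal structure function S_q(lag) = <|x[i+lag] - x[i]|^q> of a series x."""
            x = np.asarray(x, dtype=float)
            return np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags])

        # Synthetic interspike-interval (ISI) series: short-range temporal structure
        # (an AR(1)-like component) on top of random, rate-like fluctuations.
        rng = np.random.default_rng(2)
        n = 5000
        isi = np.empty(n)
        isi[0] = 10.0
        for i in range(1, n):
            isi[i] = 10.0 + 0.8 * (isi[i - 1] - 10.0) + rng.normal(scale=1.0)

        lags = np.arange(1, 50)
        s2 = structure_function(isi, lags)
        # For a correlated series S_2 keeps growing with lag until the correlation fades,
        # then saturates near 2*Var(x); the knee gives the basic "word length" of the code.
        print(np.round(s2[:10], 2))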

  20. Modelling plastic scintillator response to gamma rays using light transport incorporated FLUKA code

    Energy Technology Data Exchange (ETDEWEB)

    Ranjbar Kohan, M. [Physics Department, Tafresh University, Tafresh (Iran, Islamic Republic of); Etaati, G.R. [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology, Tehran (Iran, Islamic Republic of); Ghal-Eh, N., E-mail: ghal-eh@du.ac.ir [School of Physics, Damghan University, Damghan (Iran, Islamic Republic of); Safari, M.J. [Department of Energy Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology, Tehran (Iran, Islamic Republic of); Asadi, E. [Department of Physics, Payam-e-Noor University, Tehran (Iran, Islamic Republic of)

    2012-05-15

    The response function of NE102 plastic scintillator to gamma rays has been simulated using a joint FLUKA+PHOTRACK Monte Carlo code. The multi-purpose particle transport code, FLUKA, has been responsible for gamma transport whilst the light transport code, PHOTRACK, has simulated the transport of scintillation photons through the scintillator and light guide. The simulation results of the plastic scintillator with/without light guides of different surface coverings have been successfully verified with experiments. - Highlights: • A multi-purpose code (FLUKA) and a light transport code (PHOTRACK) have been linked. • The hybrid code has been used to generate the response function of an NE102 scintillator. • The simulated response functions exhibit a good agreement with experimental data.

  1. Measurements of stimulated-Raman-scattering-induced tilt in spectral-amplitude-coding optical code-division multiple-access systems

    Science.gov (United States)

    Al-Qazwini, Zaineb A. T.; Abdullah, Mohamad K.; Mokhtar, Makhfudzah B.

    2009-01-01

    We measure the stimulated Raman scattering (SRS)-induced tilt in spectral-amplitude-coding optical code-division multiple-access (SAC-OCDMA) systems as a function of system main parameters (transmission distance, power per chip, and number of users) via computer simulations. The results show that SRS-induced tilt significantly increases as transmission distance, power per chip, or number of users grows.

  2. SCRAM reactivity calculations with the KIKO3D code

    International Nuclear Information System (INIS)

    Hordosy, G.; Kerszturi, A.; Maraczy, Cs.; Temesvari, E.

    1999-01-01

    Discrepancies between calculated static reactivities and measured reactivities evaluated with reactivity meters led to investigating SCRAM with the KIKO3D dynamic code. The time and space dependent neutron flux in the reactor core during the rod drop measurement was calculated by the KIKO3D nodal diffusion code. For calculating the ionisation chamber signals the Green function technique was applied. The Green functions of the ionisation chambers were evaluated by solving the neutron transport equation in the reflector regions with the MCNP Monte Carlo code. The detector signals during asymmetric SCRAM measurements were calculated and compared with measured data using the inverse point kinetics transformation. The good agreement validates the KIKO3D code for determining reactivities after SCRAM. (Authors)
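
    A hedged sketch of the inverse point kinetics transformation mentioned above: given a sampled detector (power) signal, the reactivity history is reconstructed from the point kinetics equations with six delayed-neutron groups. The kinetics parameters and the synthetic signal are generic illustrative values, not the data evaluated with KIKO3D.

        import numpy as np

        # Generic six-group delayed-neutron data (illustrative values).
        BETA_I = np.array([0.000247, 0.0013845, 0.001222, 0.0026455, 0.000832, 0.000169])
        LAMBDA_I = np.array([0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87])   # 1/s
        BETA = BETA_I.sum()
        LAMBDA_GEN = 2.0e-5                                               # prompt generation time, s

        def inverse_point_kinetics(t, n):
            """Reactivity history rho(t), in dollars, from a power/detector signal n(t).

            Precursor concentrations are integrated numerically assuming the system
            was critical and in equilibrium before t[0].
            """
            n = np.asarray(n, dtype=float)
            c = BETA_I * n[0] / (LAMBDA_GEN * LAMBDA_I)      # equilibrium precursors
            rho = np.empty_like(n)
            rho[0] = 0.0
            for k in range(1, len(t)):
                dt = t[k] - t[k - 1]
                # advance precursors with the previous power level (first-order scheme)
                c = c + dt * (BETA_I * n[k - 1] / LAMBDA_GEN - LAMBDA_I * c)
                dndt = (n[k] - n[k - 1]) / dt
                rho[k] = BETA + LAMBDA_GEN * dndt / n[k] - LAMBDA_GEN * np.dot(LAMBDA_I, c) / n[k]
            return rho / BETA

        # Synthetic rod-drop-like signal: power decaying after t = 1 s.
        t = np.linspace(0.0, 10.0, 2001)
        n = np.where(t < 1.0, 1.0, 0.95 * np.exp(-2.0 * (t - 1.0)) + 0.05)
        print(inverse_point_kinetics(t, n)[::400])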

  3. ABINIT: a computer code for matter; Abinit: un code au service de la matiere

    Energy Technology Data Exchange (ETDEWEB)

    Amadon, B.; Bottin, F.; Bouchet, J.; Dewaele, A.; Jollet, F.; Jomard, G.; Loubeyre, P.; Mazevet, S.; Recoules, V.; Torrent, M.; Zerah, G. [CEA Bruyeres-le-Chatel, 91 (France)

    2008-07-01

    The PAW (Projector Augmented Wave) method has been implemented in the ABINIT Code that computes electronic structures in atoms. This method relies on the simultaneous use of a set of auxiliary functions (in plane waves) and a sphere around each atom. This method allows the computation of systems including many atoms and gives the expression of energy, forces, stress... in terms of the auxiliary function only. We have generated atomic data for iron at very high pressure (over 200 GPa). We get a bcc-hcp transition around 10 GPa and the magnetic order disappears around 50 GPa. This method has been validated on a series of metals. The development of the PAW method has required a great effort for the massive parallelization of the ABINIT code. (A.C.)

  4. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes....

  5. Simulation and interpretation codes for the JET ECE diagnostic. Part 1: physics of the codes' operation

    International Nuclear Information System (INIS)

    Bartlett, D.V.

    1983-06-01

    The codes which have been developed for the analysis of electron cyclotron emission measurements in JET are described. Their principal function is to interpret the spectra measured by the diagnostic so as to give the spatial distribution of the electron temperature in the poloidal cross-section. Various systematic effects in the data are corrected using look-up tables generated by an elaborate simulation code. The part of this code responsible for the accurate calculation of single-pass emission and refraction has been written at CNR-Milan and is described in a separate report. The present report is divided into two parts. This first part describes the methods used for the simulation and interpretation of spectra, the physical/mathematical basis of the codes written at CEA-Fontenay and presents some illustrative results

  6. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  7. Development of ADINA-J-integral code

    International Nuclear Information System (INIS)

    Kurihara, Ryoichi

    1988-07-01

    A general purpose finite element program ADINA (Automatic Dynamic Incremental Nonlinear Analysis), which was developed by Bathe et al., was revised to be able to calculate the J- and J-integral. This report introduces the numerical method used to add this capability to the code, and the evaluation of the revised ADINA-J code by using a few examples of the J estimation model, i.e. a compact tension specimen, a center cracked panel subjected to dynamic load, and a thick shell cylinder having an inner axial crack subjected to thermal load. The evaluation verified the function of the revised code. (author)

  8. Two-dimensional sensitivity calculation code: SENSETWO

    International Nuclear Information System (INIS)

    Yamauchi, Michinori; Nakayama, Mitsuo; Minami, Kazuyoshi; Seki, Yasushi; Iida, Hiromasa.

    1979-05-01

    A SENSETWO code for the calculation of cross section sensitivities with a two-dimensional model has been developed on the basis of first order perturbation theory. It uses forward neutron and/or gamma-ray fluxes and adjoint fluxes obtained by the two-dimensional discrete ordinates code TWOTRAN-II. The data and information on cross sections, geometry, nuclide density, response functions, etc. are transmitted to SENSETWO by the dump magnetic tape made in the TWOTRAN calculations. The required input for SENSETWO calculations is thus very simple. SENSETWO yields as printed output the cross section sensitivities for each coarse mesh zone and for each energy group, as well as plotted output of sensitivity profiles specified by the input. A special feature of the code is that it also calculates the reaction rate with the response function used as the adjoint source in the TWOTRAN adjoint calculation and the calculated forward flux from the TWOTRAN forward calculation. (author)

  9. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is also developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on the PSC models of the Qinshan No.1 nuclear power plant (NPP), CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
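
    A minimal sketch of the source generation idea: build a cumulative distribution function from a (hypothetical) power distribution and pick source cells by inverse-transform sampling. The JMCT/JSNT file formats and the full phase-space sampling (direction, type, energy, weight) are not modeled.

        import numpy as np

        def build_cdf(power):
            """Cumulative distribution function over source cells, from a power distribution."""
            p = np.asarray(power, dtype=float).ravel()
            cdf = np.cumsum(p)
            return cdf / cdf[-1]

        def sample_source_cells(cdf, n_particles, rng):
            """Inverse-transform sampling: pick source cell indices according to the CDF."""
            xi = rng.random(n_particles)
            return np.searchsorted(cdf, xi)

        # Hypothetical 3x3 assembly power map (relative powers).
        power_map = np.array([[0.8, 1.0, 0.8],
                              [1.0, 1.6, 1.0],
                              [0.8, 1.0, 0.8]])
        rng = np.random.default_rng(3)
        cdf = build_cdf(power_map)
        cells = sample_source_cells(cdf, 100000, rng)
        counts = np.bincount(cells, minlength=power_map.size).reshape(power_map.shape)
        print(counts / counts.sum())   # should approximate the normalized power map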

  10. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  11. Development of a code in three-dimensional cylindrical geometry based on analytic function expansion nodal (AFEN) method

    International Nuclear Information System (INIS)

    Lee, Joo Hee

    2006-02-01

    There is growing interest in developing pebble bed reactors (PBRs) as a candidate of very high temperature gas-cooled reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactor are based on old finite-difference solvers or on statistical methods. But for realistic analysis of PBRs, there is a strong desire to make available high fidelity nodal codes in three-dimensional (r,θ,z) cylindrical geometry. Recently, the Analytic Function Expansion Nodal (AFEN) method, developed quite extensively in Cartesian (x,y,z) geometry and in hexagonal-z geometry, was extended to two-group (r,z) cylindrical geometry, and gave very accurate results. In this thesis, we develop a method for the full three-dimensional cylindrical (r,θ,z) geometry and implement the method into a code named TOPS. The AFEN methodology in this geometry, as in hexagonal geometry, is 'robust' (e.g., no occurrence of singularity), due to the unique feature of the AFEN method that it does not use the transverse integration. The transverse integration in the usual nodal methods, however, leads to an impasse, that is, failure of the azimuthal term to be transverse-integrated over the r-z surface. We use 13 nodal unknowns in an outer node and 7 nodal unknowns in an innermost node. The general solution of the node can be expressed in terms of these nodal unknowns, and can be updated using the nodal balance equation and the current continuity condition. For more realistic analysis of PBRs, we implemented the Marshak boundary condition to treat the incoming current zero boundary condition and the partial current translation (PCT) method to treat voids in the core. The TOPS code was verified in various numerical tests derived from the Dodds problem and the PBMR-400 benchmark problem. The results of the TOPS code show high accuracy and faster computing times than the VENTURE code, which is based on the finite difference method (FDM)

  12. Preliminary Coupling of MATRA Code for Multi-physics Analysis

    International Nuclear Information System (INIS)

    Kim, Seongjin; Choi, Jinyoung; Yang, Yongsik; Kwon, Hyouk; Hwang, Daehyun

    2014-01-01

    The boundary conditions, such as the inlet temperature, mass flux, averaged heat flux, power distributions of the rods, and core geometry, are given as constant values or as functions of time. These conditions are separately calculated and provided by other codes, such as neutronics or system codes, to the MATRA code. In addition, the work focuses on and embodies the coupling of several codes from different physics fields. In this study, multiphysics coupling methods were developed for a subchannel code (MATRA) with neutronics codes (MASTER, DeCART) and a fuel performance code (FRAPCON-3). Preliminary evaluation results for representative sample cases are presented. The MASTER and DeCART codes provide the power distribution of the rods in the core to the MATRA code. In the case of the FRAPCON-3 code, the variation of the rod diameter induced by thermal expansion is calculated and provided. The MATRA code transfers the thermal-hydraulic conditions that each code needs. Moreover, the coupling method with each code is described

  13. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in a temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  14. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
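
    The rule of thumb quoted above is easy to state in code; the helper below is a direct transcription of the 2.5 m/(1 - r) estimate and nothing more (it performs no distance analysis of a particular code).

        def truncation_depth(memory_length, rate):
            """Rule-of-thumb truncation depth for a convolutional code: 2.5 * m / (1 - r)."""
            if not 0.0 < rate < 1.0:
                raise ValueError("code rate must lie strictly between 0 and 1")
            return 2.5 * memory_length / (1.0 - rate)

        # For a rate-1/2 code this reduces to the familiar "five times the memory length".
        print(truncation_depth(memory_length=6, rate=1/2))   # 30.0 = 5 * 6
        print(truncation_depth(memory_length=6, rate=3/4))   # 60.0, i.e. much deeper than 5*m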

  15. Code-Switching Functions in Modern Hebrew Teaching and Learning

    Science.gov (United States)

    Gilead, Yona

    2016-01-01

    The teaching and learning of Modern Hebrew outside of Israel is essential to Jewish education and identity. One of the most contested issues in Modern Hebrew pedagogy is the use of code-switching between Modern Hebrew and learners' first language. Moreover, this is one of the longest running disputes in the broader field of second language…

  16. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow distribution in parallel channels, coupled or not by conduction across the plates, is computed for imposed flowrate or pressure-drop conditions, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  17. Information preserving coding for multispectral data

    Science.gov (United States)

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels by reserving two codewords in the codebook to perform a folding over in quantization is implemented for error free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.

  18. MKENO-DAR: a direct angular representation Monte Carlo code for criticality safety analysis

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Komuro, Yuichi; Tsunoo, Yukiyasu; Nakayama, Mitsuo.

    1984-03-01

    By improving the Monte Carlo code MULTI-KENO, the MKENO-DAR (Direct Angular Representation) code has been developed for detailed criticality safety analysis. A function was added to MULTI-KENO for representing anisotropic scattering strictly. With this function, the scattering angle of a neutron is determined not by the average scattering angle μ-bar of the P1 Legendre polynomial but by a random walk operation using a probability distribution function produced with the higher order Legendre polynomials. The code is available for the FACOM-M380 computer. This report is a computer code manual for MKENO-DAR. (author)

  19. FREQUENCY ANALYSIS OF RLE-BLOCKS REPETITIONS IN THE SERIES OF BINARY CODES WITH OPTIMAL MINIMAX CRITERION OF AUTOCORRELATION FUNCTION

    Directory of Open Access Journals (Sweden)

    A. A. Kovylin

    2013-01-01

    The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, which are to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. In the synthesis of sets of binary sequences, the goal is to form them based on the minimax criterion, by which a sequence is considered to be optimal according to the intended application. In the course of the research, optimal sequences of order up to 52 were obtained and an analysis of their run-length encoding (RLE) was carried out. The analysis showed regularities in the distribution of the number of runs of different lengths in the codes that are optimal on the chosen criteria, which would make it possible to optimize the searching process for such codes in the future.
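
    To make the minimax criterion and the run-length analysis concrete, the sketch below computes the aperiodic autocorrelation of a ±1 sequence, its peak sidelobe level (the quantity the minimax criterion minimizes), and its run-length encoding. The 13-element Barker sequence is used purely as a familiar illustration; it is not one of the sequences studied in the article.

        import numpy as np
        from itertools import groupby

        def peak_sidelobe(seq):
            """Maximum absolute off-peak value of the aperiodic autocorrelation of a ±1 sequence."""
            s = np.asarray(seq, dtype=float)
            acf = np.correlate(s, s, mode="full")      # lags -(N-1) .. N-1
            centre = len(s) - 1
            sidelobes = np.delete(acf, centre)         # drop the zero-lag peak
            return int(np.max(np.abs(sidelobes)))

        def run_lengths(seq):
            """Run-length encoding: lengths of consecutive identical elements."""
            return [len(list(group)) for _, group in groupby(seq)]

        # 13-element Barker sequence: peak sidelobe level 1 (minimax-optimal at this length).
        barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
        print(peak_sidelobe(barker13))   # 1
        print(run_lengths(barker13))     # [5, 2, 2, 1, 1, 1, 1]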

  20. Development of a BWR core burn-up calculation code COREBN-BWR

    International Nuclear Information System (INIS)

    Morimoto, Yuichi; Okumura, Keisuke

    1992-05-01

    In order to evaluate the core performance of BWR type reactors, the three-dimensional core burnup calculation code COREBN-BWR and the fuel management code HIST-BWR have been developed. In analyses of BWR type reactors, thermal-hydraulics calculations must be coupled with neutronics calculations to evaluate core performance, because the steam void distribution changes according to the change of the power distribution. By adding the following new functions to the three-dimensional core burnup code COREBN2, developed in JAERI for PWR type reactor analyses, the code system becomes applicable to burnup analyses of BWR type reactors: (1) a macroscopic cross section calculation function taking into account the coolant void distribution; (2) a thermal-hydraulics calculation function to evaluate the core flow split, coolant void distribution and thermal margin; (3) a burnup calculation function under the Haling strategy; (4) a fuel management function to incorporate the thermal-hydraulics information. This report consists of the general description, calculational models, input data requirements and their explanations, detailed information on usage and a sample input. (author)

  1. Development of a neutronics code based on analytic function expansion nodal method for pebble-type High Temperature Gas-cooled Reactor design

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Zin; Lee, Joo Hee; Lee, Jae Jun; Yu, Hui; Lee, Gil Soo [Korea Advanced Institute of Science and Tehcnology, Daejeon (Korea, Republic of)

    2006-03-15

    There is growing interest in developing Pebble Bed Reactors (PBRs) as a candidate of Very High Temperature gas-cooled Reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactor are based on old finite-difference solvers or on statistical methods, and other existing nodal methods cannot be adapted for this kind of reactor because of the transverse integration problem. In this project, we developed the TOPS code in three-dimensional cylindrical geometry based on the Analytic Function Expansion Nodal (AFEN) method developed at KAIST. The TOPS code showed better results in computing time than FDM and MCNP. Also, TOPS showed very accurate results in reactor analysis.

  3. Status of reactor physics activities on cross section generation and functionalization for the prismatic very high temperature reactor, and development of spatially-heterogeneous codes

    International Nuclear Information System (INIS)

    Lee, C. H.; Zhong, Z.; Taiwo, T. A.; Yang, W. S.; Smith, M. A.; Palmiotti, G.

    2006-01-01

    The cross section generation methodology and procedure for design and analysis of the prismatic Very High Temperature Gas-cooled Reactor (VHTR) core have been addressed for the DRAGON and REBUS-3/DIF3D code suite. Approaches for tabulation and functionalization of cross sections have been investigated and implemented. The cross sections are provided at different burnup and fuel and moderator temperature states. In the tabulation approach, the multigroup cross sections are tabulated as a function of the state variables so that a cross section file is able to cover the range of core operating conditions. Cross sections for points between tabulated data points are fitted simply by linear interpolation. For the functionalization approach, an investigation of the applicability of quadratic polynomials and linear coupling for fuel and moderator temperature changes has been conducted, based on the observation that cross sections are monotonically changing with fuel or moderator temperatures. Preliminary results show that the functionalization makes it possible to cover a wide range of operating temperature conditions with only six sets of data per burnup, while maintaining a good accuracy and significantly reducing the size of the cross section file. In these approaches, the number of fission products has been minimized to a few nuclides (I/Xe/Pm/Sm and a lumped fission product) to reduce the overall computation time without sacrificing solution accuracy. Discontinuity factors (DFs) based on nodal equivalence theory have been introduced to accurately represent the significant change in neutron spectrum at the interface of the fuel and reflector regions as well as between different fuel blocks (e.g., fuel elements with burnable poisons or control rods). Using the DRAGON code, procedures have been established for generating cross sections for fuel and reflector blocks with and without control absorbers. The preliminary results indicate that the solution accuracy is improved
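
    As a toy version of the tabulation approach described above, the sketch below stores a few-group cross section on a (fuel temperature, moderator temperature) grid at a fixed burnup state and interpolates linearly between the tabulated points. The grid points and values are invented for the illustration; the quadratic functionalization variant is not shown.

        import numpy as np

        def interp_xs(table, t_fuel_grid, t_mod_grid, t_fuel, t_mod):
            """Bilinear interpolation of a tabulated cross section in (fuel T, moderator T)."""
            i = np.clip(np.searchsorted(t_fuel_grid, t_fuel) - 1, 0, len(t_fuel_grid) - 2)
            j = np.clip(np.searchsorted(t_mod_grid, t_mod) - 1, 0, len(t_mod_grid) - 2)
            x = (t_fuel - t_fuel_grid[i]) / (t_fuel_grid[i + 1] - t_fuel_grid[i])
            y = (t_mod - t_mod_grid[j]) / (t_mod_grid[j + 1] - t_mod_grid[j])
            return ((1 - x) * (1 - y) * table[i, j] + x * (1 - y) * table[i + 1, j]
                    + (1 - x) * y * table[i, j + 1] + x * y * table[i + 1, j + 1])

        # Hypothetical absorption cross section (arbitrary units) tabulated at one burnup state.
        t_fuel_grid = np.array([600.0, 900.0, 1200.0])     # K
        t_mod_grid = np.array([500.0, 600.0, 700.0])       # K
        sigma_a = np.array([[1.00, 1.02, 1.05],
                            [1.03, 1.05, 1.08],
                            [1.06, 1.08, 1.11]])
        print(interp_xs(sigma_a, t_fuel_grid, t_mod_grid, t_fuel=1000.0, t_mod=650.0))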

  4. The personification of animals: coding of human and nonhuman body parts based on posture and function.

    Science.gov (United States)

    Welsh, Timothy N; McDougall, Laura; Paulson, Stephanie

    2014-09-01

    The purpose of the present research was to determine how humans represent the bodies and limbs of nonhuman mammals based on anatomical and functional properties. To this end, participants completed a series of body-part compatibility tasks in which they responded with a thumb or foot response to the color of a stimulus (red or blue, respectively) presented on different limbs of several animals. Across the studies, this compatibility task was conducted with images of human and nonhuman animals (bears, cows, and monkeys) in bipedal or quadrupedal postures. The results revealed that the coding of the limbs of nonhuman animals is strongly influenced by the posture of the body, but not the functional capacity of the limb. Specifically, body-part compatibility effects were present for both human and nonhuman animals when the figures were in a bipedal posture, but were not present when the animals were in a quadrupedal stance (Experiments 1a-c). Experiments 2a and 2b revealed that the posture-based body-part compatibility effects were not simply a vertical spatial compatibility effect or due to a mismatch between the posture of the body in the image and the participant. These data indicate that nonhuman animals in a bipedal posture are coded with respect to the "human" body representation, whereas nonhuman animals in a quadrupedal posture are not mapped to the human body representation. Overall, these studies provide new insight into the processes through which humans understand, mimic, and learn from the actions of nonhuman animals. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Calculation code NIRVANA for free boundary MHD equilibrium

    International Nuclear Information System (INIS)

    Ninomiya, Hiromasa; Suzuki, Yasuo; Kameari, Akihisa

    1975-03-01

    A calculation method and code for solving the free boundary problem of MHD equilibrium have been developed. Usage of the code 'NIRVANA' is described. The toroidal plasma current density, determined as a function of the flux function PSI, is substituted by a group of ring currents, whereby the equation of MHD equilibrium is transformed into an integral equation. Either of two iterative methods is chosen to solve the integral equation, depending on the assumptions made about the plasma surface points. Calculation of the magnetic field configurations is possible when the plasma surface coincides self-consistently with the magnetic flux, including the separatrix points. The code is usable for calculation of circular or non-circular shell-less Tokamak equilibria. (auth.)

  6. A multiobjective approach to the genetic code adaptability problem.

    Science.gov (United States)

    de Oliveira, Lariza Laura; de Oliveira, Paulo S L; Tinós, Renato

    2015-02-19

    The organization of the canonical code has intrigued researchers since it was first described. If we consider all codes mapping the 64 codons into 20 amino acids and one stop codon, there are more than 1.51×10(84) possible genetic codes. The main question related to the organization of the genetic code is why exactly the canonical code was selected among this huge number of possible genetic codes. Many researchers argue that the organization of the canonical code is a product of natural selection and that the code's robustness against mutations would support this hypothesis. In order to investigate the natural selection hypothesis, some researchers employ optimization algorithms to identify regions of the genetic code space where the best codes, according to a given evaluation function, can be found (engineering approach). The optimization process uses only one objective to evaluate the codes, generally based on the robustness for an amino acid property. Only one objective is also employed in the statistical approach for the comparison of the canonical code with random codes. We propose a multiobjective approach where two or more objectives are considered simultaneously to evaluate the genetic codes. In order to test our hypothesis that the multiobjective approach is useful for the analysis of the genetic code adaptability, we implemented a multiobjective optimization algorithm where two objectives are simultaneously optimized. Using as objectives the robustness against mutation with respect to the amino acid property polar requirement (objective 1) and the robustness with respect to the hydropathy index or molecular volume (objective 2), we found solutions closer to the canonical genetic code in terms of robustness, when compared with the results using only one objective reported by other authors. Using more objectives, more optimal solutions are obtained and, as a consequence, more information can be used to investigate the adaptability of the genetic code. The multiobjective approach
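
    A minimal sketch of the kind of multiobjective evaluation described above: a candidate code is scored, once per amino acid property, by the mean squared change of that property over all single-nucleotide substitutions, giving one objective per property. The tiny four-codon 'code' and the property values below are placeholders, not the canonical code or real polar requirement/hydropathy data.

        from itertools import product

        BASES = "ACGU"

        def robustness(code, prop):
            """Mean squared change of an amino-acid property over all single-point mutations.

            code : dict mapping codons to amino-acid labels ('*' marks a stop codon)
            prop : dict mapping amino-acid labels to a numeric property value
            Lower values mean a more mutation-robust code for that property.
            """
            total, count = 0.0, 0
            for codon, aa in code.items():
                if aa == "*":
                    continue
                for pos, alt in product(range(3), BASES):
                    if alt == codon[pos]:
                        continue
                    mutant = code.get(codon[:pos] + alt + codon[pos + 1:])
                    if mutant is None or mutant == "*":
                        continue
                    total += (prop[aa] - prop[mutant]) ** 2
                    count += 1
            return total / count

        def objectives(code, prop1, prop2):
            """Two-objective score used in a multiobjective (Pareto) comparison of codes."""
            return robustness(code, prop1), robustness(code, prop2)

        # Placeholder mini-code over four codons and two invented property scales.
        toy_code = {"UUU": "F", "UUC": "F", "UUA": "L", "UUG": "L"}
        prop_a = {"F": 5.0, "L": 4.9}     # hypothetical "polar requirement"-like values
        prop_b = {"F": 2.8, "L": 3.8}     # hypothetical hydropathy-like values
        print(objectives(toy_code, prop_a, prop_b))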

  7. Stress-intensity factors for surface cracks in pipes: a computer code for evaluation by use of influence functions. Final report

    International Nuclear Information System (INIS)

    Dedhia, D.D.; Harris, D.O.

    1982-06-01

    A user-oriented computer program for the evaluation of stress intensity factors for cracks in pipes is presented. Stress intensity factors for semi-elliptical, complete circumferential and long longitudinal cracks can be obtained using this computer program. The code is based on the method of influence functions which makes it possible to treat arbitrary stresses on the plane of the crack. The stresses on the crack plane can be entered as a mathematical or tabulated function. A user's manual is included in this report. Background information is also included
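
    As a schematic of the influence function method, the sketch below evaluates a stress intensity factor as the integral of an influence (weight) function against an arbitrary crack-plane stress profile using trapezoidal quadrature. Both the influence function and the stress profile are invented placeholders; the report's actual influence functions for pipe geometries are not reproduced.

        import numpy as np

        def stress_intensity_factor(influence, stress, a, n=201):
            """K = integral over 0..a of h(x, a) * sigma(x) dx, by the trapezoidal rule.

            influence : callable h(x, a), the influence (weight) function for the geometry
            stress    : callable sigma(x), stress on the crack plane (a tabulated profile can be wrapped)
            a         : crack depth
            """
            x = np.linspace(0.0, a, n)
            return np.trapz(influence(x, a) * stress(x), x)

        # Placeholder (smooth) influence function and a linear through-wall stress profile.
        # Real influence functions depend on the pipe geometry and crack shape.
        def h_placeholder(x, a):
            return (2.0 / np.sqrt(np.pi * a)) * (1.0 + 0.5 * x / a)

        def sigma_linear(x):
            return 100.0 - 50.0 * (x / 0.01)    # MPa, hypothetical bending-like profile

        print(stress_intensity_factor(h_placeholder, sigma_linear, a=0.01))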

  8. Evaluation Codes from an Affine Veriety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study...... includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes....

  9. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  10. APC: A new code for Atmospheric Polarization Computations

    International Nuclear Information System (INIS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2013-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface. -- Highlights: •A new code, APC, has been developed. •The code was validated against well-known codes. •The BPDF for an arbitrary Mueller matrix is computed

  11. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    Science.gov (United States)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  13. Covariance data processing code. ERRORJ

    International Nuclear Information System (INIS)

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)

  14. Statistical mechanics of low-density parity-check codes

    Energy Technology Data Exchange (ETDEWEB)

    Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 2268502 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)

    2004-02-13

    We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multi-spin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems. (topical review)

  16. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  17. ETF system code: composition and applications

    International Nuclear Information System (INIS)

    Reid, R.L.; Wu, K.F.

    1980-01-01

    A computer code has been developed for application to ETF tokamak system and conceptual design studies. The code determines cost, performance, configuration, and technology requirements as a function of tokamak parameters. The ETF code is structured in a modular fashion in order to allow independent modeling of each major tokamak component. The primary benefit of modularization is that it allows updating of a component module, such as the TF coil module, without disturbing the remainder of the system code as long as the input/output to the modules remains unchanged. The modules may be run independently to perform specific design studies, such as determining the effect of allowable strain on TF coil structural requirements, or the modules may be executed together as a system to determine global effects, such as defining the impact of aspect ratio on the entire tokamak system

  18. The histone codes for meiosis.

    Science.gov (United States)

    Wang, Lina; Xu, Zhiliang; Khawar, Muhammad Babar; Liu, Chao; Li, Wei

    2017-09-01

    Meiosis is a specialized process that produces haploid gametes from diploid cells by a single round of DNA replication followed by two successive cell divisions. It contains many special events, such as programmed DNA double-strand break (DSB) formation, homologous recombination, crossover formation and resolution. These events are associated with dynamically regulated chromosomal structures, the dynamic transcriptional regulation and chromatin remodeling are mainly modulated by histone modifications, termed 'histone codes'. The purpose of this review is to summarize the histone codes that are required for meiosis during spermatogenesis and oogenesis, involving meiosis resumption, meiotic asymmetric division and other cellular processes. We not only systematically review the functional roles of histone codes in meiosis but also discuss future trends and perspectives in this field. © 2017 Society for Reproduction and Fertility.

  19. The θ-γ neural code.

    Science.gov (United States)

    Lisman, John E; Jensen, Ole

    2013-03-20

    Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Machine function based control code algebras

    NARCIS (Netherlands)

    Bergstra, J.A.

    Machine functions have been introduced by Earley and Sturgis in [6] in order to provide a mathematical foundation of the use of the T-diagrams proposed by Bratman in [5]. Machine functions describe the operation of a machine at a very abstract level. A theory of hardware and software based on

  1. Reparable Item Supply-Readiness Assessment Using MICAP Data

    Science.gov (United States)

    1984-05-01

    behavior of supply that is captured by Dyna-METRIC. The S-R space makes it quite easy for those who set policy and make decisions to get a... all that common. Obviously the consequence of a no-cann policy would be an intolerable NMCS rate. So cannibalization must be a way of life in such a...

  2. Plotting system for the MINCS code

    International Nuclear Information System (INIS)

    Watanabe, Tadashi

    1990-08-01

    The plotting system for the MINCS code is described. The transient two-phase flow analysis code MINCS has been developed to provide a computational tool for analysing various two-phase flow phenomena in one-dimensional ducts. Two plotting systems, namely the SPLPLOT system and the SDPLOT system, can be used as the plotting functions. The SPLPLOT system is used for plotting time transients of variables, while the SDPLOT system is for spatial distributions. The SPLPLOT system is based on the SPLPACK system, which is used as a general tool for plotting results of transient analysis codes or experiments. The SDPLOT is based on the GPLP program, which is also regarded as one of the general plotting programs. In the SPLPLOT and the SDPLOT systems, the standardized data format called the SPL format is used in reading data to be plotted. The output data format of MINCS is translated into the SPL format by using the conversion system called the MINTOSPL system. In this report, how to use the plotting functions is described. (author)

  3. Efficient DS-UWB MUD Algorithm Using Code Mapping and RVM

    Directory of Open Access Journals (Sweden)

    Pingyan Shi

    2016-01-01

    A hybrid multiuser detection (MUD) scheme using code mapping and wrong-code recognition based on a relevance vector machine (RVM) for direct sequence ultra-wideband (DS-UWB) systems is developed to cope with multiple access interference (MAI) and computational efficiency. A new MAI suppression mechanism is studied in the following steps: firstly, code mapping, an optimal decision function, is constructed, and the output candidate code of the matched filter is mapped to a feature space by the function. In the feature space, simulation results show that the error codes caused by MAI and the single-user mapped codes can be classified by a threshold which is related to the SNR of the receiver. Then, on the basis of code mapping, the RVM is used to distinguish the wrong codes from the right ones and finally correct them. Compared with traditional MUD approaches, the proposed method can considerably improve the bit error ratio (BER) performance due to its special MAI suppression mechanism. Simulation results also show that the proposed method can approximately achieve the BER performance of optimal multiuser detection (OMD), while the computational complexity approximately equals that of the matched filter. Moreover, the proposed method is less sensitive to the number of users.
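
    A toy numerical sketch of the threshold step described above: a mapped confidence statistic separates decisions that are likely corrupted by MAI from reliable ones. Plain thresholding of |y| stands in here for both the paper's code-mapping function and the RVM stage; all names, signal models, and noise levels are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      bits = rng.integers(0, 2, 2000)                   # transmitted bits of one user
      chips = 2.0 * bits - 1.0                          # BPSK decision statistics without interference
      y = chips + 0.8 * rng.standard_normal(2000)       # matched-filter outputs with toy MAI plus noise

      hard = (y > 0).astype(int)                        # tentative hard decisions
      feature = np.abs(y)                               # mapped "feature": distance from the decision boundary
      suspect = feature < 0.3                           # threshold plays the role of the RVM classifier here

      errors = hard != bits
      print("error rate overall          :", errors.mean())
      print("error rate among suspects   :", errors[suspect].mean())
      print("error rate among non-suspect:", errors[~suspect].mean())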

  4. Library system for a one dimensional tokamak transport code: (LIBJT60), 1

    International Nuclear Information System (INIS)

    Hirayama, Toshio

    1982-12-01

    A library system is developed to control and manage huge programs at the level of FORTRAN source. It is applied to the widely used one-dimensional tokamak transport codes (LIBJT60), which have been developed in the Division of Large Tokamak Development. The data and program structure of the transport code turns out to be flexible enough to respond to various demands, and this gigantic code framework can be decomposed into groups of compact codes, each with a specific function. Some editing support tools for programming and debugging are also developed to save programming work. By applying this library system, users can obtain a code whose functions can be efficiently developed. (author)

  5. An object-oriented scripting interface to a legacy electronic structure code

    DEFF Research Database (Denmark)

    Bahn, Sune Rastad; Jacobsen, Karsten Wedel

    2002-01-01

    The authors have created an object-oriented scripting interface to a mature density functional theory code. The interface gives users a high-level, flexible handle on the code without rewriting the underlying number-crunching code. The authors also discuss design issues and the advantages...

  6. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  7. On quadratic residue codes and hyperelliptic curves

    Directory of Open Access Journals (Sweden)

    David Joyner

    2008-01-01

    For an odd prime p and each non-empty subset $S\subset \mathrm{GF}(p)$, consider the hyperelliptic curve $X_S$ defined by $y^2 = f_S(x)$, where $f_S(x) = \prod_{a\in S}(x-a)$. Using a connection between binary quadratic residue codes and hyperelliptic curves over $\mathrm{GF}(p)$, this paper investigates how coding theory bounds give rise to bounds such as the following example: for all sufficiently large primes p there exists a subset $S\subset \mathrm{GF}(p)$ for which the bound $|X_S(\mathrm{GF}(p))| > 1.39p$ holds. We also use the quasi-quadratic residue codes defined below to construct an example of a formally self-dual optimal code whose zeta function does not satisfy the ``Riemann hypothesis.''
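
    A small sketch of the kind of point count $|X_S(\mathrm{GF}(p))|$ appearing in the bound above, computed by brute force over GF(p) (affine points only; the paper's normalization may also count points at infinity; the example values of p and S are arbitrary):

      def count_affine_points(p, S):
          """Count affine points on y^2 = f_S(x) over GF(p), with f_S(x) = prod_{a in S}(x - a)."""
          total = 0
          for x in range(p):
              t = 1
              for a in S:
                  t = t * (x - a) % p
              if t == 0:
                  total += 1                      # only y = 0 solves y^2 = 0
              elif pow(t, (p - 1) // 2, p) == 1:  # t is a quadratic residue mod p
                  total += 2                      # two square roots, y and -y
          return total

      # Example: p = 31, S = {1, 2, 5}; compare the count against a 1.39*p-type bound.
      p, S = 31, {1, 2, 5}
      print(count_affine_points(p, S), "affine points; 1.39*p =", 1.39 * p)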

  8. Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D

    Science.gov (United States)

    Carlson, Jan-Renee; Vasta, Veer N.; White, Jeffery

    2015-01-01

    In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-ω turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good comparisons with the data were achieved.
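
    As a generic illustration of what a wall-function closure computes (not FUN3D's actual Knopp-type formulation, and with illustrative constants): given the wall-parallel velocity u at the first grid point y, solve the log law u+ = ln(y+)/kappa + B for the friction velocity u_tau, which then sets the wall momentum flux tau_w = rho*u_tau^2:

      import math

      def friction_velocity(u, y, nu, kappa=0.41, B=5.0, iters=50):
          """Solve u = u_tau * (ln(y*u_tau/nu)/kappa + B) for u_tau by Newton iteration.

          Generic log-law wall function sketch; constants and formulation are illustrative.
          """
          u_tau = max(1e-6, math.sqrt(nu * u / y))        # laminar estimate as a starting guess
          for _ in range(iters):
              yplus = y * u_tau / nu
              f = u_tau * (math.log(yplus) / kappa + B) - u
              dfdu = math.log(yplus) / kappa + B + 1.0 / kappa
              u_tau = max(1e-8, u_tau - f / dfdu)         # guarded Newton update
          return u_tau

      u_tau = friction_velocity(u=10.0, y=1e-4, nu=1.5e-5)
      tau_wall = 1.2 * u_tau**2                           # wall shear stress with rho = 1.2 kg/m^3
      print(u_tau, tau_wall)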

  9. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where...... an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes....

  10. Users' manual for the FTDRAW (Fault Tree Draw) code

    International Nuclear Information System (INIS)

    Oikawa, Tetsukuni; Hikawa, Michihiro; Tanabe, Syuichi; Nakamura, Norihiro

    1985-02-01

    This report provides the information needed to use the FTDRAW (Fault Tree Draw) code, which is designed for drawing a fault tree. The FTDRAW code has several optional functions, such as the overview of a fault tree output, fault tree output in English description, fault tree output in Japanese description and summary tree output. Inputs for the FTDRAW code are component failure rate information and gate information which are filed out by a execution of the FTA-J (Fault Tree Analysis-JAERI) code system and option control data. Using the FTDRAW code, we can get drawings of fault trees which is easy to see, efficiently. (author)

  11. SWAT4.0 - The integrated burnup code system driving continuous energy Monte Carlo codes MVP, MCNP and deterministic calculation code SRAC

    International Nuclear Information System (INIS)

    Kashima, Takao; Suyama, Kenya; Takada, Tomoyuki

    2015-03-01

    There have been two versions of SWAT, depending on the details of its development history: the revised SWAT, which uses the deterministic calculation code SRAC as a neutron transport solver, and SWAT3.1, which uses the continuous energy Monte Carlo code MVP or MCNP5 for the same purpose. It takes several hours, however, to execute one calculation with a continuous energy Monte Carlo code, even on the supercomputer of the Japan Atomic Energy Agency. Moreover, two-dimensional burnup calculation is not practical using the revised SWAT because it has problems in producing effective cross section data and applying them to arbitrary fuel geometries when a calculation model has multiple burnup zones. Therefore, SWAT4.0 has been developed by adding to SWAT3.1 a function to utilize the deterministic code SRAC2006, which has a shorter calculation time, as an outer neutron transport solver module for burnup calculation. SWAT4.0 is able to execute two-dimensional burnup calculations by providing an input data template of SRAC2006 in the SWAT4.0 input data and updating the atomic number densities of the burnup zones in each burnup step. This report describes the outline, input data instructions, and calculation examples of SWAT4.0. (author)
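
    A schematic sketch of the kind of driver loop such a coupled burnup system implements: a transport solve followed by a density update in each burnup step. The function bodies and numbers below are placeholders for illustration only, not SWAT4.0's actual interfaces or physics:

      import math

      def transport_solve(densities):
          # Placeholder for the outer transport module (SRAC2006 or MVP/MCNP in SWAT4.0).
          return {zone: 1.0e14 for zone in densities}             # toy one-group flux per zone (n/cm^2/s)

      def deplete(n, flux, dt_days):
          # Placeholder single-nuclide depletion: dN/dt = -sigma * flux * N.
          sigma = 1.0e-24                                         # toy one-group cross section (cm^2)
          return n * math.exp(-sigma * flux * dt_days * 86400.0)

      densities = {"zone1": 2.0e21, "zone2": 1.5e21}              # atoms/cm^3, toy values
      for dt in (30.0, 30.0, 30.0):                               # three 30-day burnup steps
          flux = transport_solve(densities)                       # transport solve with current densities
          densities = {z: deplete(n, flux[z], dt) for z, n in densities.items()}
      print(densities)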

  12. Development and improvement of safety analysis code for geological disposal

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    In order to confirm the long-term safety of geological disposal, a probabilistic safety assessment code and other analysis codes, which can evaluate the probability of each event and its influence on the engineered and natural barriers, were introduced. We confirmed the basic functions of those codes and studied the relation between those functions and the FEP/PID that should be taken into consideration in the safety assessment. We are planning to develop a 'Nuclide Migration Assessment System' for the purpose of improving the efficiency of assessment work, preventing human error in analysis, and assuring the quality of the analysis environment and analysis work for safety assessment. As the first step, we defined the system requirements and decided the system composition and the functions that should be implemented in it based on those requirements. (author)

  13. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry, that neurons with extrinsically bifurcating axons do not project in both directions, has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  14. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural

  15. Graphical user interface development for the MARS code

    International Nuclear Information System (INIS)

    Jeong, J.-J.; Hwang, M.; Lee, Y.J.; Kim, K.D.; Chung, B.D.

    2003-01-01

    KAERI has developed the best-estimate thermal-hydraulic system code MARS using the RELAP5/MOD3 and COBRA-TF codes. To exploit the excellent features of the two codes, we consolidated the two codes. Then, to improve the readability, maintainability, and portability of the consolidated code, all the subroutines were completely restructured by employing a modular data structure. At present, a major part of the MARS code development program is underway to improve the existing capabilities. The code couplings with three-dimensional neutron kinetics, containment analysis, and transient critical heat flux calculations have also been carried out. At the same time, graphical user interface (GUI) tools have been developed for user friendliness. This paper presents the main features of the MARS GUI. The primary objective of the GUI development was to provide a valuable aid for all levels of MARS users in their output interpretation and interactive controls. Especially, an interactive control function was designed to allow operator actions during simulation so that users can utilize the MARS code like conventional nuclear plant analyzers (NPAs). (author)

  16. Light water reactor fuel analysis code FEMAXI-7; model and structure

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Saitou, Hiroaki

    2011-03-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in both normal conditions and anticipated transient conditions. This code is an advanced version produced by extending the former version FEMAXI-6 with numerous functional improvements and extensions. In FEMAXI-7, many new models have been added and parameters have been clearly arranged. Also, to facilitate effective maintenance and accessibility of the code, modularization of subroutines and functions has been attained, and quality comment descriptions of variables and physical quantities have been incorporated in the source code. With these advancements, the FEMAXI-7 code has been upgraded to a versatile analytical tool for high burnup fuel behavior analyses. This report describes in detail the design, basic theory and structure, models and numerical method, and improvements and extensions. (author)

  17. Network Coded Software Defined Networking

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Roetter, Daniel Enrique Lucani

    2015-01-01

    Software Defined Networking (SDN) and Network Coding (NC) are two key concepts in networking that have garnered a large attention in recent years. On the one hand, SDN's potential to virtualize services in the Internet allows a large flexibility not only for routing data, but also to manage....... This paper advocates for the use of SDN to bring about future Internet and 5G network services by incorporating network coding (NC) functionalities. The inherent flexibility of both SDN and NC provides a fertile ground to envision more efficient, robust, and secure networking designs, that may also...

  18. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  19. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming the buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.

  20. A GPU code for analytic continuation through a sampling method

    Directory of Open Access Journals (Sweden)

    Johan Nordström

    2016-01-01

    We here present a code for performing analytic continuation of fermionic Green’s functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  1. On the classification of long non-coding RNAs

    KAUST Repository

    Ma, Lina; Bajic, Vladimir B.; Zhang, Zhang

    2013-01-01

    Long non-coding RNAs (lncRNAs) have been found to perform various functions in a wide variety of important biological processes. To make easier interpretation of lncRNA functionality and conduct deep mining on these transcribed sequences

  2. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.

  3. SRAC2006: A comprehensive neutronics calculation code system

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Kugo, Teruhiko; Kaneko, Kunio; Tsuchihashi, Keichiro

    2007-02-01

    The SRAC is a code system applicable to neutronics analysis of a variety of reactor types. Since the publication of the second version of the users manual (JAERI-1302) in 1986 for the SRAC system, a number of additions and modifications to the functions and the library data have been made to establish a comprehensive neutronics code system. The current system includes major neutron data libraries (JENDL-3.3, JENDL-3.2, ENDF/B-VII, ENDF/B-VI.8, JEFF-3.1, JEF-2.2, etc.), and integrates five elementary codes for neutron transport and diffusion calculation: PIJ, based on the collision probability method and applicable to 16 kinds of lattice models; the S_N transport codes ANISN (1D) and TWOTRN (2D); and the diffusion codes TUD (1D) and CITATION (multi-D). The system also includes an auxiliary code COREBN for multi-dimensional core burn-up calculation. (author)

  4. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, constructed by simple algebraic means. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans at high data rates.

  5. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  6. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.
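
    A small sketch of the kind of quantity discussed above: how strongly an ordered, possibly non-contiguous letter pair constrains word identity in a toy lexicon, expressed as the conditional probability of a word given that the pair occurs in it in order (lexicon and uniform prior are illustrative assumptions):

      lexicon = ["table", "cable", "stable", "tale", "tile", "title", "battle"]

      def contains_ordered(word, pair):
          """True if the two letters occur in the word in this order (not necessarily adjacent)."""
          i = word.find(pair[0])
          return i != -1 and pair[1] in word[i + 1:]

      def words_matching(pair):
          return [w for w in lexicon if contains_ordered(w, pair)]

      # Conditional probability of a target word given an ordered letter pair,
      # assuming all lexicon words are equally likely a priori.
      for pair in [("t", "b"), ("b", "l"), ("t", "e")]:
          matches = words_matching(pair)
          p = 1 / len(matches) if matches else 0.0
          print(pair, matches, "P(word | pair) =", round(p, 3))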

  7. The arbitrary order design code Tlie 1.0

    International Nuclear Information System (INIS)

    Zeijts, J. van; Neri, Filippo

    1993-01-01

    We describe the arbitrary order charged particle transfer map code TLIE. This code is a general 6D relativistic design code with a MAD-compatible input language that, among other features, implements user-defined functions and subroutines and nested fitting and optimization. First we describe the mathematics and physics in the code. Aside from generating maps for all the standard accelerator elements, we describe an efficient method for generating nonlinear transfer maps for realistic magnet models. We have implemented the method to arbitrary order in our accelerator design code for cylindrical current sheet magnets. We have also implemented a self-consistent space-charge approach as in CHARLIE. Subsequently we give a description of the input language and, finally, several examples from production runs, such as cases with stacked multipoles with overlapping fringe fields. (Author)

  8. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm to matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.
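
    For concreteness, a minimal sketch of the matrix-product construction [C_1 C_2]*A for the simplest non-singular by columns matrix A = [[1,1],[0,1]] over GF(2), which reduces to the classical (u, u+v) construction; the component codes below are toy binary choices, not the Reed-Solomon codes of the paper:

      import numpy as np

      # Toy component codes given by generator matrices; C2 is a subcode of C1,
      # matching the nested setting used in the paper.
      G1 = np.array([[1, 0, 1], [0, 1, 1]])   # C1: [3,2] even-weight code
      G2 = np.array([[1, 1, 0]])              # C2: [3,1] subcode of C1
      A  = np.array([[1, 1], [0, 1]])         # non-singular by columns

      def matrix_product_codewords(G1, G2, A):
          """Enumerate codewords of [C1 C2]*A over GF(2)."""
          def codewords(G):
              k = G.shape[0]
              return [np.array(m) @ G % 2 for m in np.ndindex(*([2] * k))]
          words = set()
          for c1 in codewords(G1):
              for c2 in codewords(G2):
                  u = (c1 * A[0, 0] + c2 * A[1, 0]) % 2
                  v = (c1 * A[0, 1] + c2 * A[1, 1]) % 2
                  words.add(tuple(int(x) for x in np.concatenate([u, v])))
          return sorted(words)

      for w in matrix_product_codewords(G1, G2, A):
          print(w)   # with this A the codewords have the (u, u+v) form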

  9. Defense Inventory: Services Generally Have Reduced Excess Inventory, but Additional Actions Are Needed

    Science.gov (United States)

    2015-04-01

    MICAP: Mission Impaired Capability Awaiting Parts; OSD: Office of the Secretary of Defense; S&OP: Sales and Operations Planning. This is a work of... In January 2013, the Army began to implement a Sales and Operations Planning (S&OP) process to improve its supply chain and inventory management... According to the Army Materiel Command officials, the Army’s decision to implement S&OP was recommended by an Integrated Project Team that concluded the

  10. Development of probabilistic fracture mechanics code PASCAL and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Katsuyuki; Onizawa, Kunio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Li, Yinsheng; Kato, Daisuke [Fuji Research Institute Corporation, Tokyo (Japan)

    2001-03-01

    As a part of the aging and structural integrity research for LWR components, a new PFM (Probabilistic Fracture Mechanics) code PASCAL (PFM Analysis of Structural Components in Aging LWR) has been developed since FY1996. This code evaluates the failure probability of an aged reactor pressure vessel subjected to transient loading such as PTS (Pressurized Thermal Shock). The development of the code has aimed to improve the accuracy and reliability of the analysis by introducing new analysis methodologies and algorithms that reflect recent developments in fracture mechanics methodologies and computer performance. The code has new functions for optimized sampling and cell dividing in stratified Monte Carlo simulation, the elastic-plastic fracture criterion of the R6 method, crack extension analysis models for semi-elliptical cracks, evaluation of the effect of thermal annealing, etc. In addition, an input data generator for temperature and stress distribution time histories was also prepared in the code. The functions and performance of the code have been confirmed by verification analyses and some case studies on the influence parameters. The present phase of the development will be completed in FY2000. Thus this report provides the user's manual and theoretical background of the code. (author)
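
    As a generic illustration of what a probabilistic fracture mechanics calculation does (a toy model with made-up distributions and a simple K_I >= K_Ic failure criterion, not PASCAL's actual methodology or data): sample a crack depth and a fracture toughness, compare the applied stress intensity factor against the toughness, and estimate the failure probability by Monte Carlo:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200_000

      # Toy random inputs: crack depth a (m) and fracture toughness K_Ic (MPa*sqrt(m)).
      a    = rng.lognormal(mean=np.log(0.005), sigma=0.4, size=n)
      K_Ic = rng.normal(loc=100.0, scale=20.0, size=n)

      # Applied stress intensity factor for a surface crack under stress sigma,
      # K_I = Y * sigma * sqrt(pi * a), with a toy geometry factor Y.
      sigma, Y = 400.0, 1.1                      # MPa, dimensionless
      K_I = Y * sigma * np.sqrt(np.pi * a)

      failure_probability = np.mean(K_I >= K_Ic)
      print(f"estimated failure probability: {failure_probability:.2e}")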

  11. Module description of TOKAMAK equilibrium code MEUDAS

    International Nuclear Information System (INIS)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa

    2002-01-01

    The analysis of an axisymmetric MHD equilibrium serves as a foundation of tokamak research, such as the design of devices, theoretical research, and the analysis of experimental results. For this reason, an efficient MHD analysis code has been developed at JAERI since the start of its tokamak research. The free boundary equilibrium code ''MEUDAS'', which uses both the DCR method (Double-Cyclic-Reduction Method) and a Green's function, can specify the pressure and the current distribution arbitrarily, and, being fast and highly precise, has been applied to the analysis of a broad range of physical subjects. The MHD convergence calculation technique in ''MEUDAS'' has also been built into various newly developed codes. This report explains in detail each module in ''MEUDAS'' used for the convergence calculation in solving the MHD equilibrium. (author)

  12. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually abort) completely unrelated with m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results of non-malleable codes against different families of tampering functions. However... The construction presented here builds on the non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  13. Towards Effective Intra-flow Network Coding in Software Defined Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    Donghai Zhu

    2016-01-01

    Wireless Mesh Networks (WMNs) have the potential to provide convenient broadband wireless Internet access to mobile users. With the support of the Software-Defined Networking (SDN) paradigm that separates the control plane and the data plane, WMNs can be easily deployed and managed. In addition, by exploiting the broadcast nature of the wireless medium and the spatial diversity of multi-hop wireless networks, intra-flow network coding has shown a greater benefit in comparison with traditional routing paradigms in data transmission for WMNs. In this paper, we develop a novel OpenCoding protocol, which combines the SDN technique with intra-flow network coding for WMNs. Our protocol can simplify the deployment and management of the network and improve network performance. In OpenCoding, a controller that works on the control plane makes routing decisions for mesh routers, and the hop-by-hop forwarding function is replaced by network coding functions in the data plane. We analyze the overhead of OpenCoding. Through a simulation study, we show the effectiveness of the OpenCoding protocol in comparison with existing schemes. Our data show that OpenCoding outperforms both traditional routing and intra-flow network coding schemes.
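
    A minimal sketch of the intra-flow network coding primitive itself, random linear combinations of a batch of packets over GF(2) with Gaussian-elimination decoding, independent of the OpenCoding protocol and SDN control logic described in the paper (batch size and packet length are arbitrary toy values):

      import numpy as np

      rng = np.random.default_rng(2)
      batch = rng.integers(0, 2, size=(4, 16))           # 4 source packets of 16 bits each

      def encode(batch, rng):
          """One coded packet: a random GF(2) combination of the batch, plus its coefficient vector."""
          coeffs = rng.integers(0, 2, size=batch.shape[0])
          return coeffs, coeffs @ batch % 2

      def decode(received):
          """Gaussian elimination over GF(2); returns the source packets once full rank is reached."""
          C = np.array([c for c, _ in received])
          P = np.array([p for _, p in received])
          A = np.concatenate([C, P], axis=1) % 2
          n = C.shape[1]
          row = 0
          for col in range(n):
              pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
              if pivot is None:
                  return None                            # not yet enough innovative packets
              A[[row, pivot]] = A[[pivot, row]]
              for r in range(A.shape[0]):
                  if r != row and A[r, col]:
                      A[r] ^= A[row]
              row += 1
          return A[:n, n:]                               # reduced system [I | X] gives the batch X

      received, recovered = [], None
      while recovered is None:
          received.append(encode(batch, rng))
          if len(received) >= batch.shape[0]:
              recovered = decode(received)
      print(len(received), "coded packets used; recovered OK:", np.array_equal(recovered, batch))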

  14. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows the ``unique decipherability'' to be recovered at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
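
    Coding partitions weaken unique decipherability; as background, here is a minimal Sardinas-Patterson check for whether a finite code is UD. This classical test is not the canonical-partition algorithm of the paper, just the UD building block that the paper generalizes:

      def is_uniquely_decipherable(code):
          """Sardinas-Patterson test for a finite set of non-empty words over an alphabet of characters."""
          code = set(code)

          def residuals(A, B):
              # words s such that b = a + s for some a in A, b in B, with s non-empty
              return {b[len(a):] for a in A for b in B if b.startswith(a) and len(b) > len(a)}

          # C1: dangling suffixes obtained from pairs of distinct codewords
          current = {w[len(v):] for v in code for w in code if w != v and w.startswith(v)}
          seen = set()
          while current:
              if current & code:
                  return False                   # a dangling suffix is itself a codeword
              frozen = frozenset(current)
              if frozen in seen:
                  return True                    # the suffix sets cycle without hitting a codeword
              seen.add(frozen)
              current = residuals(code, current) | residuals(current, code)
          return True

      print(is_uniquely_decipherable({"0", "01", "11"}))    # True  (a suffix code, hence UD)
      print(is_uniquely_decipherable({"0", "01", "10"}))    # False ("010" has two factorizations)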

  15. Monte-Carlo code calculation of 3D reactor core model with usage of burnt fuel isotopic compositions, obtained by engineering codes

    Energy Technology Data Exchange (ETDEWEB)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [National Research Centre ' Kurchatov Institute' , Moscow (Russian Federation)

    2016-09-15

    Burn-up calculation of large systems by the Monte-Carlo code (MCU) is a complex process and requires large computational costs. Previously prepared isotopic compositions are proposed to be used for the Monte-Carlo code calculations of different system states with burnt fuel. The isotopic compositions are calculated by an approximation method. The approximation method is based on the usage of a spectral functionality and reference isotopic compositions that are calculated by the engineering codes (TVS-M, BIPR-7A and PERMAK-A). The multiplication factors and power distributions of FAs from a 3-D reactor core are calculated in this work by the Monte-Carlo code MCU using the previously prepared isotopic compositions. Separate states of the burnt core are considered. The results of the MCU calculations were compared with those obtained by the engineering codes.

  16. Coincidence: Fortran code for calculation of (e, e'x) differential cross-sections, nuclear structure functions and polarization asymmetry in self-consistent random phase approximation with Skyrme interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cavinato, M.; Marangoni, M.; Saruis, A.M.

    1990-10-01

    This report describes the COINCIDENCE code written for the IBM 3090/300E computer in the Fortran 77 language. The output data of this code are the (e, e'x) threefold differential cross-sections, the nuclear structure functions, the polarization asymmetry and the angular correlation coefficients. In the real photon limit, the output data are the angular distributions for plane polarized incident photons. The code reads from tape the transition matrix elements previously calculated by the continuum self-consistent RPA (random phase approximation) theory with Skyrme interactions. This code has been used to perform a numerical analysis of coincidence (e, e'x) reactions with polarized electrons on the ¹⁶O nucleus.

  17. Code-switching among chiShona-English bilinguals in courtroom ...

    African Journals Online (AJOL)

    As has become the norm in bilingual situations, code-switching in both formal and informal contexts has increased recognition as a verbal mode of communication. This article presents a parsimonious exegesis of the patterns and functions of code-switching in the courtroom discourse of chiShona-English bilinguals.

  18. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.

  19. Graphic user interface for COSMOS code

    International Nuclear Information System (INIS)

    Oh, Je Yong; Koo, Yang Hyun; Lee, Byung Ho; Cheon, Jin Sik; Sohn, Dong Seong

    2003-06-01

    The Graphic User Interface (GUI), which consists of graphical elements such as windows, menus, buttons, icons, and so on, makes it possible for ordinary users to operate a computer easily. Hence, a GUI was introduced to improve the efficiency of parameter input in the COSMOS code. Functions to output graphs on the screen and to postscript files were also added, and the graph library can be applied to other codes. The details of the principles of the GUI and the graphic library are described in the report

  20. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  1. Direct G-code manipulation for 3D material weaving

    Science.gov (United States)

    Koda, S.; Tanaka, H.

    2017-04-01

    The process of conventional 3D printing begins by first building a 3D model, then converting the model to G-code via slicer software, feeding the G-code to the printer, and finally starting the print. The most simple and popular 3D printing technique is Fused Deposition Modeling. However, in this method, the printing path that the printer head can take is restricted by the G-code, so printed 3D models with complex patterns have structural errors such as holes or gaps between the printed material lines. In addition, the structural density and the material's position in the printed model are difficult to control. We realized a G-code editing tool, Fabrix, for making more precise and functional printed models with both single and multiple materials. Models with different stiffness are fabricated by controlling the printing density of the filament materials with our method. In addition, multi-material 3D printing has the potential to expand the achievable physical properties through material combination and its G-code editing. These results show that the new printing method provides more creative and functional 3D printing techniques.
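
    A minimal sketch of direct G-code editing in the spirit described above: read G1 move lines and scale their extrusion (E) values to change the deposited material density. This is a generic post-processing filter with made-up sample lines, not the Fabrix tool itself; note that E values are often cumulative, and a uniform scale factor scales each increment equally:

      import re

      def scale_extrusion(gcode_lines, factor):
          """Scale the E (extrusion) parameter of every G1 move by `factor`."""
          e_param = re.compile(r"(E)(-?\d+\.?\d*)")
          out = []
          for line in gcode_lines:
              if line.startswith("G1") and "E" in line:
                  line = e_param.sub(lambda m: f"E{float(m.group(2)) * factor:.5f}", line)
              out.append(line)
          return out

      sample = [
          "G28 ; home all axes",
          "G1 X10.0 Y10.0 F1500 ; travel move, no extrusion",
          "G1 X20.0 Y10.0 E0.84000 ; printing move",
          "G1 X20.0 Y20.0 E1.68000 ; printing move",
      ]
      for line in scale_extrusion(sample, factor=0.8):
          print(line)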

  2. Implementation of the kinetics in the transport code AZTRAN

    International Nuclear Information System (INIS)

    Duran G, J. A.; Del Valle G, E.; Gomez T, A. M.

    2017-09-01

    This paper shows the implementation of time dependence in the three-dimensional transport code AZTRAN (AZtlan TRANsport), which belongs to the AZTLAN platform for the analysis of nuclear reactors (currently under development). With this implementation, the AZTRAN code is able to numerically solve the time-dependent transport equation in XYZ geometry for several energy groups, using the discrete ordinates method S_N for the discretization of the angular variable, the nodal method RTN-0 for spatial discretization, and method 0 for discretization in time. Initially, the code only solved the neutron transport equation in steady state, so the temporal part was implemented by integrating the neutron transport equation with respect to time, together with the balance equations for the concentrations of delayed neutron precursors, for which method 0 was applied. After the kinetics had been implemented directly in the code, the improved quasi-static method was implemented as a tool for reducing computation time: the angular flux is factored into the product of two functions called the shape function and the amplitude function, where the first is calculated on long time steps, called macro-steps, and the second is resolved on small time steps, called micro-steps. In the new version of AZTRAN, several benchmark problems taken from the literature were simulated; the problems are two- and three-dimensional and allowed the accuracy and stability of the code to be corroborated, showing in general good behavior on the reference tests. (Author)
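
    The factorization at the heart of the improved quasi-static method mentioned above can be written schematically (shown here for a generic flux; AZTRAN applies it to the angular flux) as

      \phi(\mathbf{r}, E, t) = A(t)\,\psi(\mathbf{r}, E, t), \qquad
      \frac{d}{dt}\int \frac{\psi(\mathbf{r}, E, t)}{v(E)}\, w(\mathbf{r}, E)\, d\mathbf{r}\, dE = 0,

    where the amplitude A(t) is advanced on the fine micro-steps from point-kinetics-like equations, the shape \psi is recomputed only on the coarse macro-steps, and w is a weight function (typically an adjoint flux) whose normalization condition makes the factorization unique.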

  3. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.

  4. Light water reactor fuel analysis code FEMAXI-7. Model and structure

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2013-07-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in both normal conditions and anticipated transient conditions. This code is an advanced version produced by extending the former version FEMAXI-6 with numerous functional improvements and extensions. In FEMAXI-7, many new models have been added and parameters have been clearly arranged. Also, to facilitate effective maintenance and accessibility of the code, modularization of subroutines and functions has been attained, and quality comment descriptions of variables and physical quantities have been incorporated in the source code. With these advancements, the FEMAXI-7 code has been upgraded to a versatile analytical tool for high burnup fuel behavior analyses. This report describes in detail the design, basic theory and structure, models and numerical method of FEMAXI-7, and its improvements and extensions. (author)

  5. Detecting Malicious Code by Binary File Checking

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2014-01-01

    The object, library and executable code is stored in binary files. The functionality of a binary file is altered when its content or program source code is changed, causing undesired effects. A direct content change is possible when the intruder knows the structural information of the binary file. The paper describes the structural properties of binary object files, how the content can be controlled by a possible intruder, and the ways to identify malicious code in such kinds of files. Because object files are inputs to linking processes, early detection of malicious content is crucial to avoid infection of the binary executable files.

  6. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  7. ACT-XN: Revised version of an activation calculation code for fusion reactor analysis. Supplement of the function for the sequential reaction activation by charged particles

    International Nuclear Information System (INIS)

    Yamauchi, Michinori; Sato, Satoshi; Nishitani, Takeo; Konno, Chikara; Hori, Jun-ichi; Kawasaki, Hiromitsu

    2007-09-01

    The ACT-XN is a revised version of the ACT4 code, which was developed at the Japan Atomic Energy Research Institute (JAERI) to calculate the transmutation, induced activity, decay heat, delayed gamma-ray source, etc. for fusion devices. The ACT4 code cannot deal with the sequential reactions of charged particles generated by primary neutron reactions. In the design of present experimental reactors, the activation due to sequential reactions may not be of great concern, as it is usually buried under the activity from primary neutron reactions. However, low activation materials are one of the important factors for constructing high power fusion reactors in the future, and unexpected activation may be produced through sequential reactions. Therefore, in the present work, the ACT4 code was newly supplemented with calculation functions for the sequential reactions and renamed ACT-XN. The ACT-XN code is equipped with functions to calculate effective cross sections for sequential reactions and input them into the transmutation matrix. The FISPACT data were adopted for the (x,n) reaction cross sections, charged particle emission spectra and stopping powers. The nuclear reaction chain data library was revised to cope with the (x,n) reactions. The charged particles are specified as p, d, t, 3He (h) and α. The code was applied to the analysis of the FNS experiment for LiF and a Demo-reactor design with FLiBe, and it was confirmed that the code reproduces the experimental values within 15-30% discrepancies. In addition, it was noted that the dose rate due to sequential reactions cannot always be neglected after a certain cooling period for some of the low activation materials. (author)

  8. Aminotryptophan-containing barstar: structure--function tradeoff in protein design and engineering with an expanded genetic code.

    Science.gov (United States)

    Rubini, Marina; Lepthien, Sandra; Golbik, Ralph; Budisa, Nediljko

    2006-07-01

    The indole ring of the canonical amino acid tryptophan (Trp) possesses distinguished features, such as sterical bulk, hydrophobicity and the nitrogen atom which is capable of acting as a hydrogen bond donor. The introduction of an amino group into the indole moiety of Trp yields the structural analogs 4-aminotryptophan ((4-NH(2))Trp) and 5-aminotryptophan ((5-NH(2))Trp). Their hydrophobicity and spectral properties are substantially different when compared to those of Trp. They resemble the purine bases of DNA and share their capacity for pH-sensitive intramolecular charge transfer. The Trp --> aminotryptophan substitution in proteins during ribosomal translation is expected to result in related protein variants that acquire these features. These expectations have been fulfilled by incorporating (4-NH(2))Trp and (5-NH(2))Trp into barstar, an intracellular inhibitor of the ribonuclease barnase from Bacillus amyloliquefaciens. The crystal structure of (4-NH(2))Trp-barstar is similar to that of the parent protein, whereas its spectral and thermodynamic behavior is found to be remarkably different. The T(m) values of (4-NH(2))Trp- and (5-NH(2))Trp-barstar are lowered by about 20 degrees Celsius, and they exhibit a strongly reduced unfolding cooperativity and substantial loss of free energy in folding. Furthermore, a folding kinetics study of (4-NH(2))Trp-barstar revealed that the denatured state is even preferred over the native one. The combination of structural and thermodynamic analyses clearly shows how the structures of substituted barstar display a typical structure-function tradeoff: the acquirement of unique pH-sensitive charge transfer as a novel function is achieved at the expense of protein stability. These findings provide a new insight into the evolution of the amino acid repertoire of the universal genetic code and highlight possible problems regarding protein engineering and design by using an expanded genetic code.

  9. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms with that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  10. Comprehensive Identification of Long Non-coding RNAs in Purified Cell Types from the Brain Reveals Functional LncRNA in OPC Fate Determination.

    Directory of Open Access Journals (Sweden)

    Xiaomin Dong

    2015-12-01

    Full Text Available Long non-coding RNAs (lncRNAs (> 200 bp play crucial roles in transcriptional regulation during numerous biological processes. However, it is challenging to comprehensively identify lncRNAs, because they are often expressed at low levels and with more cell-type specificity than are protein-coding genes. In the present study, we performed ab initio transcriptome reconstruction using eight purified cell populations from mouse cortex and detected more than 5000 lncRNAs. Predicting the functions of lncRNAs using cell-type specific data revealed their potential functional roles in Central Nervous System (CNS development. We performed motif searches in ENCODE DNase I digital footprint data and Mouse ENCODE promoters to infer transcription factor (TF occupancy. By integrating TF binding and cell-type specific transcriptomic data, we constructed a novel framework that is useful for systematically identifying lncRNAs that are potentially essential for brain cell fate determination. Based on this integrative analysis, we identified lncRNAs that are regulated during Oligodendrocyte Precursor Cell (OPC differentiation from Neural Stem Cells (NSCs and that are likely to be involved in oligodendrogenesis. The top candidate, lnc-OPC, shows highly specific expression in OPCs and remarkable sequence conservation among placental mammals. Interestingly, lnc-OPC is significantly up-regulated in glial progenitors from experimental autoimmune encephalomyelitis (EAE mouse models compared to wild-type mice. OLIG2-binding sites in the upstream regulatory region of lnc-OPC were identified by ChIP (chromatin immunoprecipitation-Sequencing and validated by luciferase assays. Loss-of-function experiments confirmed that lnc-OPC plays a functional role in OPC genesis. Overall, our results substantiated the role of lncRNA in OPC fate determination and provided an unprecedented data source for future functional investigations in CNS cell types. We present our datasets and

  11. Production of analysis code for 'JOYO' dosimetry experiment

    International Nuclear Information System (INIS)

    Sasaki, Makoto; Nakazawa, Masaharu.

    1981-01-01

    As part of the measurement and analysis plan for the Dosimetry Experiment at the ''JOYO'' experimental fast reactor, analysis of the neutron flux spectra is performed using the NEUPAC (Neutron Unfolding Code Package) computer program. The code calculates the neutron flux spectra and other integral quantities from the activation data of the dosimeter foils. The NEUPAC code is based on the J1-type unfolding method, and the estimated neutron flux spectrum is obtained as its solution. The program is able to determine the integral quantities and their sensitivities, together with an error estimate of the unfolded spectra and integral quantities. The code also performs a chi-square test of the input/output data, and contains many options for the calculational routines. This report presents the analytic theory, the program algorithms, and a description of the functions and use of the NEUPAC code. (author)
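
    For orientation only, the unfolding problem that NEUPAC solves can be viewed as recovering a group flux spectrum phi from foil activities a = R phi, where R is the dosimeter response matrix. The sketch below uses a plain regularized least-squares solution rather than the J1-type method, and every number in it is hypothetical:

        # Illustrative spectrum unfolding, not the J1-type algorithm used by NEUPAC.
        import numpy as np

        rng = np.random.default_rng(0)
        n_foils, n_groups = 6, 20
        R = rng.uniform(0.0, 1.0, (n_foils, n_groups))        # response matrix (hypothetical)
        phi_true = np.exp(-0.2 * np.arange(n_groups))          # "true" spectrum (hypothetical)
        a = R @ phi_true * (1 + 0.02 * rng.standard_normal(n_foils))  # measured activities

        alpha = 1e-2                                           # regularization strength
        phi_est = np.linalg.solve(R.T @ R + alpha * np.eye(n_groups), R.T @ a)

        # An integral quantity and a chi-square style consistency check.
        print("total flux estimate:", phi_est.sum())
        print("chi2:", float((((R @ phi_est - a) / (0.02 * a)) ** 2).sum()))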

  12. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.

  13. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    International Nuclear Information System (INIS)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR

  14. The Alba ray tracing code: ART

    Science.gov (United States)

    Nicolas, Josep; Barla, Alessandro; Juanhuix, Jordi

    2013-09-01

    The Alba ray tracing code (ART) is a suite of Matlab functions and tools for the ray tracing simulation of x-ray beamlines. The code is structured in different layers, which allow its use within optimization routines as well as easy control from a graphical user interface. Additional tools for slope error handling and for grating efficiency calculations are also included. Generic characteristics of ART include the accumulation of rays to improve statistics without memory limitations, while still providing normalized values of flux and resolution in physically meaningful units.
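
    The ray-accumulation idea can be sketched as follows. This is not the ART Matlab toolbox, only an illustration of histogramming rays batch by batch so that statistics improve without keeping every ray in memory; the trace_batch function is a hypothetical stand-in for a beamline trace:

        import numpy as np

        edges = np.linspace(-1.0, 1.0, 201)        # detector bins [mm] (hypothetical)
        counts = np.zeros(len(edges) - 1)
        rng = np.random.default_rng(1)

        def trace_batch(n):
            """Stand-in for a beamline trace: ray positions at the detector plane."""
            return 0.1 * rng.standard_normal(n)    # hypothetical focused beam

        for _ in range(100):                       # 100 batches of 10^5 rays each
            counts += np.histogram(trace_batch(100_000), bins=edges)[0]

        print("normalized peak of the flux profile:", counts.max() / counts.sum())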

  15. Computer Code for Nanostructure Simulation

    Science.gov (United States)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It can also be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  16. Research and Design in Unified Coding Architecture for Smart Grids

    Directory of Open Access Journals (Sweden)

    Gang Han

    2013-09-01

    Full Text Available A standardized and shared information platform is the foundation of the Smart Grids. In order to improve the information integration of the power grid dispatching center and achieve efficient data exchange, sharing and interoperability, a unified coding architecture is proposed. The architecture includes a coding management layer, a coding generation layer, an information models layer and an application system layer. The hierarchical design allows the whole coding architecture to adapt to different application environments, different interfaces and loosely coupled requirements, and realizes the integrated model-management function of the power grids. A life cycle and survival evaluation method for the unified coding architecture is also proposed, which ensures the stability and availability of the coding architecture. Finally, future development directions for Smart Grid coding technology are discussed.

  17. The spammed code offset method

    NARCIS (Netherlands)

    Skoric, B.; Vreede, de N.

    2013-01-01

    Helper data schemes are a security primitive used for privacy-preserving biometric databases and Physical Unclonable Functions. One of the oldest known helper data schemes is the Code Offset Method (COM). We propose an extension of the COM: the helper data is accompanied by many instances of fake

  18. The spammed code offset method

    NARCIS (Netherlands)

    Skoric, B.; Vreede, de N.

    2014-01-01

    Helper data schemes are a security primitive used for privacy-preserving biometric databases and physical unclonable functions. One of the oldest known helper data schemes is the code offset method (COM). We propose an extension of the COM: the helper data are accompanied by many instances of fake
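
    For readers unfamiliar with the underlying primitive, the plain (unspammed) Code Offset Method can be sketched with a toy repetition code as below; this illustrates COM itself under hypothetical parameters, not the spammed extension proposed in these papers:

        import numpy as np

        rng = np.random.default_rng(2)
        k, rep = 8, 3                            # message bits, repetition factor

        def encode(msg):                         # repetition-code encoder
            return np.repeat(msg, rep)

        def decode(word):                        # majority-vote decoder
            return (word.reshape(k, rep).sum(axis=1) > rep // 2).astype(int)

        x = rng.integers(0, 2, k * rep)          # enrollment measurement (hypothetical)
        c = encode(rng.integers(0, 2, k))        # random codeword
        w = c ^ x                                # public helper data

        noise = np.zeros(k * rep, dtype=int)
        noise[5] = 1                             # one flipped bit at re-measurement
        c_prime = w ^ (x ^ noise)                # noisy codeword
        x_rec = w ^ encode(decode(c_prime))      # corrected reconstruction of x
        print("recovered exactly:", bool(np.array_equal(x_rec, x)))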

  19. What the success of brain imaging implies about the neural code.

    Science.gov (United States)

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  20. Development of dynamic simulation code for fuel cycle fusion reactor

    Energy Technology Data Exchange (ETDEWEB)

    Aoki, Isao; Seki, Yasushi [Department of Fusion Engineering Research, Naka Fusion Research Establishment, Japan Atomic Energy Research Institute, Naka, Ibaraki (Japan); Sasaki, Makoto; Shintani, Kiyonori; Kim, Yeong-Chan

    1999-02-01

    A dynamic simulation code for the fuel cycle of a fusion experimental reactor has been developed. The code follows the fuel inventory change with time in the plasma chamber and the fuel cycle system during 2-day pulse operation cycles. The time dependence of the fuel inventory distribution is evaluated considering fuel burn and exhaust in the plasma chamber, and the purification and supply functions. For each subsystem of the plasma chamber and the fuel cycle system, the fuel inventory equation is written based on the equation of state, considering fuel burn and the functions of exhaust, purification, and supply. The steady-state processing constants of each subsystem were taken from the values in the ITER Conceptual Design Activity (CDA) report. Using this code, the time dependence of the fuel supply and inventory is shown as a function of the burn state and the subsystem processing functions. (author)
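
    A toy sketch of this kind of inventory bookkeeping is given below; it is not the JAERI code, and the two-box model and all rate constants are hypothetical:

        # Toy two-box model: plasma chamber and fuel cycle system, stepped in time
        # with burn, exhaust and supply terms. All rates and units are arbitrary.
        burn_frac = 0.02        # fraction of the supplied fuel burned per step
        exhaust_rate = 0.5      # fraction of chamber inventory exhausted per step
        supply_rate = 0.4       # fraction of cycle-system inventory supplied per step

        chamber, cycle_sys = 0.0, 100.0     # initial inventories [g]
        for step in range(200):             # pulse operation, one step per time unit
            supply = supply_rate * cycle_sys
            burn = burn_frac * supply
            exhaust = exhaust_rate * chamber
            chamber += supply - burn - exhaust
            cycle_sys += exhaust - supply
        print("final chamber / cycle-system inventories:", round(chamber, 2), round(cycle_sys, 2))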

  1. Lung volumes: measurement, clinical use, and coding.

    Science.gov (United States)

    Flesch, Judd D; Dine, C Jessica

    2012-08-01

    Measurement of lung volumes is an integral part of complete pulmonary function testing. Some lung volumes can be measured during spirometry; however, measurement of the residual volume (RV), functional residual capacity (FRC), and total lung capacity (TLC) requires special techniques. FRC is typically measured by one of three methods. Body plethysmography uses Boyle's Law to determine lung volumes, whereas inert gas dilution and nitrogen washout use dilution properties of gases. After determination of FRC, expiratory reserve volume and inspiratory vital capacity are measured, which allows the calculation of the RV and TLC. Lung volumes are commonly used for the diagnosis of restriction. In obstructive lung disease, they are used to assess for hyperinflation. Changes in lung volumes can also be seen in a number of other clinical conditions. Reimbursement for measurement of lung volumes requires knowledge of current procedural terminology (CPT) codes, relevant indications, and an appropriate level of physician supervision. Because of recent efforts to eliminate payment inefficiencies, the 10 previous CPT codes for lung volumes, airway resistance, and diffusing capacity have been bundled into four new CPT codes.
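
    The volume arithmetic described above is simple enough to show directly; the numbers in this example are hypothetical:

        # RV and TLC from a measured FRC plus spirometric quantities (litres).
        def lung_volumes(frc, erv, ivc):
            rv = frc - erv      # residual volume
            tlc = rv + ivc      # total lung capacity
            return rv, tlc

        rv, tlc = lung_volumes(frc=3.0, erv=1.1, ivc=4.6)
        print(f"RV = {rv:.1f} L, TLC = {tlc:.1f} L")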

  2. Design LDPC Codes without Cycles of Length 4 and 6

    Directory of Open Access Journals (Sweden)

    Kiseon Kim

    2008-04-01

    Full Text Available We present an approach for constructing LDPC codes without cycles of length 4 and 6. Firstly, we design three submatrices with different shifting functions given by the proposed schemes; we then combine them into the matrix specified by the proposed approach and, finally, expand that matrix into the desired parity-check matrix using identity matrices and cyclic shift matrices of the identity matrix. Simulation results in the AWGN channel verify that the BER of the proposed code is close to that of Mackay's random codes and Tanner's QC codes, and that the good BER performance of the proposed code is maintained at high code rates.
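
    The final expansion step described above (a small matrix of shift values expanded with identity matrices and their cyclic shifts) can be sketched as follows; the shift values here are arbitrary and do not implement the authors' cycle-avoiding design:

        import numpy as np

        def circulant(shift, z):
            """z x z identity matrix cyclically shifted by 'shift' columns."""
            return np.roll(np.eye(z, dtype=int), shift, axis=1)

        z = 7                                     # expansion factor (hypothetical)
        base = np.array([[0, 1, 3],               # base matrix of shift values
                         [2, 5, 6]])

        H = np.block([[circulant(s, z) for s in row] for row in base])
        print(H.shape)                            # (2*z, 3*z) parity-check matrix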

  3. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we firstly study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distances than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  4. Code system for fast reactor neutronics analysis

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Abe, Junji; Sato, Wakaei.

    1983-04-01

    A code system for the analysis of fast reactor neutronics has been developed for the purpose of convenient use and error reduction. The JOINT code produces the input data file to be used in the neutronics calculation code and also prepares the cross section library file in an assigned format. The effective cross sections are saved in the PDS file in a unified format. At the present stage, this code system includes the following codes: SLAROM, ESELEM5 and EXPANDA-G for the production of effective cross sections, and CITATION-FBR, ANISN-JR, TWOTRAN2, PHENIX, 3DB, MORSE, CIPER and SNPERT. In the course of the development, some utility programs and service programs have been additionally developed. These are used for accessing the PDS file, editing the cross sections, and graphic display. Included in this report are a description of the input data format of JOINT and the other programs, and of the function of each subroutine and utility program. The usage of the PDS file is also explained. In Appendix A, the input formats are described for the revised version of the CIPER code. (author)

  5. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism; it can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  6. Extension of the code COCOSYS to a dispersion code for smoke and carbon monoxide

    International Nuclear Information System (INIS)

    Sdouz, Gert; Mayrhofer, Robert

    2009-01-01

    The code COCOSYS (Containment Code SYStem) was developed by GRS in Germany to simulate processes and nuclear plant states during severe accidents in the containments of light water reactors. It contains several physical models, in particular a module for aerosol behaviour. The goal of this work was to extend COCOSYS to applications in more general geometries, mainly complex public buildings. For the application to public buildings, models for air-conditioning systems and different boundary conditions for different environments were developed. The principal application of the extended COCOSYS code is to emergency situations, especially the simulation of carbon monoxide and smoke dispersion. After developing and implementing the new models, several test calculations were performed to evaluate the functionality of the extended code. The comparison of the results with those of the original COCOSYS code showed no discrepancies. For the first realistic application, several fire emergency scenarios in the Vienna General Hospital (AKH) were selected in agreement with the fire department of the hospital. One of the scenarios addresses the danger of carbon monoxide (CO) and smoke leaking into a fire protection section through a damaged fire protection flap. As a result of the dispersion simulation, the CO concentration in all of the rooms is obtained. Together with additional results such as deposition and smoke dispersion, the outcome of the simulation can be used for training. Among the next steps are the validation of the new models and the selection of critical scenarios. (author)

  7. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  8. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.

  9. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael Mohammed Abdullah

    2016-08-15

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.

  10. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.

  11. COAST code conversion from Cyber to HP

    International Nuclear Information System (INIS)

    Lee, Hae Cho

    1996-04-01

    The transient thermal-hydraulic behavior of the reactor coolant system in a nuclear power plant following loss of coolant flow is analyzed by use of the COAST digital computer code. COAST calculates individual loop flow rates and steam generator pressure drops as a function of time following coast-down of any number of reactor coolant pumps. This report firstly describes the detailed work carried out for the installation of COAST on the HP 9000/700 series and the code validation results after installation. Secondly, it describes the corresponding work related to the installation of COAST on the Apollo DN10000 series, together with the relevant code validation results. Attached is a report on software verification and validation results. 7 refs. (Author)

  12. ELISE, a code for intensity dependent effects

    International Nuclear Information System (INIS)

    Barton, M.Q.

    1991-01-01

    The Electron ring Limits on Intensity, Stability, and Emittance (ELISE) code described in this paper computes many of the intensity dependent effects of interest to the builder of a small electron storage ring. ELISE is a program, developed largely for the author's own use, which duplicates many of the functions provided by the more general program ZAP developed by the Berkeley group. The motivation for the code was to provide an interactive system for quick answers that could be used during accelerator commissioning. A lattice program, IDA, developed earlier by the author while at Brookhaven National Laboratory, provides a good model of the type of user friendly interaction that would be desirable in such a code

  13. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations

  14. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be provided by using the EDW code compared to existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that EDW achieves much better performance than the Hadamard and Modified Frequency-Hopping (MFH) codes.

  15. Coding for effective denial management.

    Science.gov (United States)

    Miller, Jackie; Lineberry, Joe

    2004-01-01

    Nearly everyone will agree that accurate and consistent coding of diagnoses and procedures is the cornerstone for operating a compliant practice. The CPT or HCPCS procedure code tells the payor what service was performed and also (in most cases) determines the amount of payment. The ICD-9-CM diagnosis code, on the other hand, tells the payor why the service was performed. If the diagnosis code does not meet the payor's criteria for medical necessity, all payment for the service will be denied. Implementation of an effective denial management program can help "stop the bleeding." Denial management is a comprehensive process that works in two ways. First, it evaluates the cause of denials and takes steps to prevent them. Second, denial management creates specific procedures for refiling or appealing claims that are initially denied. Accurate, consistent and compliant coding is key to both of these functions. The process of proactively managing claim denials also reveals a practice's administrative strengths and weaknesses, enabling radiology business managers to streamline processes, eliminate duplicated efforts and shift a larger proportion of the staff's focus from paperwork to servicing patients--all of which are sure to enhance operations and improve practice management and office morale. Accurate coding requires a program of ongoing training and education in both CPT and ICD-9-CM coding. Radiology business managers must make education a top priority for their coding staff. Front office staff, technologists and radiologists should also be familiar with the types of information needed for accurate coding. A good staff training program will also cover the proper use of Advance Beneficiary Notices (ABNs). Registration and coding staff should understand how to determine whether the patient's clinical history meets criteria for Medicare coverage, and how to administer an ABN if the exam is likely to be denied. Staff should also understand the restrictions on use of

  16. Development of EASYQAD version β: A Visualization Code System for QAD-CGGP-A Gamma and Neutron Shielding Calculation Code

    International Nuclear Information System (INIS)

    Kim, Jae Cheon; Lee, Hwan Soo; Ha, Pham Nhu Viet; Kim, Soon Young; Shin, Chang Ho; Kim, Jong Kyung

    2007-01-01

    EASYQAD had previously been developed using a MATLAB GUI (Graphical User Interface) at Hanyang University in order to perform gamma and neutron shielding calculations conveniently. It had been completed as version α of the radiation shielding analysis code. In this study, EASYQAD was upgraded to version β with many additional functions and more user-friendly graphical interfaces. To allow general users to run it on the Windows XP environment without any MATLAB installation, this version was developed as a standalone code system.

  17. Summary description of the scale modular code system

    International Nuclear Information System (INIS)

    Parks, C.V.

    1987-12-01

    SCALE - a modular code system for Standardized Computer Analyses for Licensing Evaluation - has been developed at Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission staff. The SCALE system utilizes well-established computer codes and methods within standard analytic sequences that allow simplified free-form input, automate the data processing and coupling between codes, and provide accurate and reliable results. System development has been directed at criticality safety, shielding, and heat transfer analysis of spent fuel transport and/or storage casks. However, only a few of the sequences (and none of the individual functional modules) are restricted to cask applications. This report will provide a background on the history of the SCALE development and review the components and their function within the system. The available data libraries are also discussed, together with the automated features that standardize the data processing and systems analysis. 83 refs., 32 figs., 11 tabs

  18. Accelerator-driven transmutation reactor analysis code system (ATRAS)

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu; Tsujimoto, Kazufumi; Takizuka, Takakazu; Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1999-03-01

    JAERI is proceeding with a design study of a hybrid-type minor actinide transmutation system which mainly consists of an intense proton accelerator and a fast subcritical core. The neutronics and burnup characteristics of the accelerator-driven system are important from the viewpoint of maintaining subcriticality and the energy balance during system operation. To determine those characteristics accurately, it is necessary to include reactions in the high-energy region, which are not treated by ordinary reactor analysis codes. The authors developed a code system named ATRAS to analyze the neutronics and burnup characteristics of accelerator-driven subcritical reactor systems. ATRAS has a burnup analysis function that takes into account the effect of the spallation neutron source. ATRAS consists of a spallation analysis code, neutron transport codes and a burnup analysis code. Utility programs for fuel exchange, pre-processing and post-processing are also incorporated. (author)

  19. A guide to the use of SUPERB code

    International Nuclear Information System (INIS)

    Jagannathan, V.; Jain, R.P.

    1983-01-01

    The SUPERB code has been developed for the neutronics design of a BWR fuel assembly. The code SUPERB provides the few-group homogenised lattice parameters of the fuel box as a function of burnup for different void fractions, control states, and fuel and moderator temperatures. These nuclear data form the basic input to subsequent steady-state or transient core analyses. This report describes the modelling of a BWR fuel box with almost all of its complexities, such as poisoned pins and the control blade. This illustration and the sample input included here should provide a first-hand acquaintance with the SUPERB code and its use. It is hoped that this report facilitates the use of SUPERB by a variety of users, whose constructive feedback is invaluable not only in improving the versatility of the code but also in removing any hitherto hidden infelicities. (author)

  20. A note on the minimum Lee distance of certain self-dual modular codes

    NARCIS (Netherlands)

    Asch, van A.G.; Martens, F.J.L.

    2012-01-01

    In a former paper we investigated the connection between p -ary linear codes, p prime, and theta functions. Corresponding to a given code a suitable lattice and its associated theta function were defined. Using results from the theory of modular forms we got an algorithm to determine an upper bound

  1. Continuous Non-malleable Codes

    DEFF Research Database (Denmark)

    Faust, Sebastian; Mukherjee, Pratyay; Nielsen, Jesper Buus

    2014-01-01

    or modify it to the encoding of a completely unrelated value. This paper introduces an extension of the standard non-malleability security notion - so-called continuous non-malleability - where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non...... is necessary to achieve continuous non-malleability in the split-state model. Moreover, we illustrate that none of the existing constructions satisfies our uniqueness property and hence is not secure in the continuous setting. We construct a split-state code satisfying continuous non-malleability. Our scheme...... is based on the inner product function, collision-resistant hashing and non-interactive zero-knowledge proofs of knowledge and requires an untamperable common reference string. We apply continuous non-malleable codes to protect arbitrary cryptographic primitives against tampering attacks. Previous...

  2. Mentor Texts and the Coding of Academic Writing Structures: A Functional Approach

    Directory of Open Access Journals (Sweden)

    Wilder Yesid Escobar Alméciga

    2014-10-01

    Full Text Available The purpose of the present pedagogical experience was to address the English language writing needs of university-level students pursuing a degree in bilingual education with an emphasis in the teaching of English. Using mentor texts and coding academic writing structures, an instructional design was developed to directly address the shortcomings presented through a triangulated needs analysis. Through promoting awareness of international standards of writing as well as fostering an understanding of the inherent structures of academic texts, a methodology intended to increase academic writing proficiency was explored. The study suggests that mentor texts and the coding of academic writing structures can have a positive impact on the production of students’ academic writing.

  3. Module description of TOKAMAK equilibrium code MEUDAS

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2002-01-01

    The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, such as the design of devices, theoretical studies, and the analysis of experimental results. For this reason, an efficient MHD analysis code has been developed at JAERI from the start of its TOKAMAK research. The free-boundary equilibrium code ''MEUDAS'', which uses both the DCR method (Double-Cyclic-Reduction Method) and a Green's function, can specify the pressure and current distributions arbitrarily and, being both fast and highly precise, has been applied to the analysis of a broad range of physical subjects. The MHD convergence calculation technique in ''MEUDAS'' has also been built into various newly developed codes. This report explains in detail each module in ''MEUDAS'' used for performing the convergence calculation in solving the MHD equilibrium. (author)

  4. ESP-TIMOC code manual

    International Nuclear Information System (INIS)

    Jaarsma, R.; Perlado, J.M.; Rief, H.

    1978-01-01

    ESP-TIMOC is an 'Event Scanning Program' to analyse the events (collision or boundary crossing parameters) of Monte Carlo particle transport problems. It is a modular program and belongs to the TIMOC code system. ESP-TIMOC is primarily designed to calculate time-dependent response functions such as energy-dependent fluxes and currents at interfaces. An eventual extension to other quantities is simple and straightforward.

  5. Energy-Efficient Channel Coding Strategy for Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Grasielli Barreto

    2017-03-01

    Full Text Available Underwater acoustic networks (UAN allow for efficiently exploiting and monitoring the sub-aquatic environment. These networks are characterized by long propagation delays, error-prone channels and half-duplex communication. In this paper, we address the problem of energy-efficient communication through the use of optimized channel coding parameters. We consider a two-layer encoding scheme employing forward error correction (FEC codes and fountain codes (FC for UAN scenarios without feedback channels. We model and evaluate the energy consumption of different channel coding schemes for a K-distributed multipath channel. The parameters of the FEC encoding layer are optimized by selecting the optimal error correction capability and the code block size. The results show the best parameter choice as a function of the link distance and received signal-to-noise ratio.

  6. Development of Regulatory Audit Core Safety Code : COREDAX

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chae Yong; Jo, Jong Chull; Roh, Byung Hwan [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of); Lee, Jae Jun; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2005-07-01

    The Korea Institute of Nuclear Safety (KINS) has developed a core neutronics simulator, the COREDAX code, for verifying the core safety of the SMART-P reactor, with technical support from the Korea Advanced Institute of Science and Technology (KAIST). The COREDAX code would be used for regulatory audit calculations of 3-dimensional core neutronics. The COREDAX code solves the steady-state and time-dependent multi-group neutron diffusion equation in hexagonal geometry as well as rectangular geometry by the analytic function expansion nodal (AFEN) method. The AFEN method was developed at KAIST, and its excellent accuracy has been verified internationally. The COREDAX code was originally programmed based on the AFEN method. The accuracy of the AFEN method was excellent for hexagonal 2-dimensional problems, but there was a need for improvement for hexagonal-z 3-dimensional problems. Hence, several solution routines of the AFEN method were improved, finally creating the advanced AFEN method, on which the COREDAX code is based. The initial version of the COREDAX code completes a basic framework, performing eigenvalue calculations and kinetics calculations with thermal-hydraulic feedback, for audit calculations of the steady-state core design and reactivity-induced accidents of the SMART-P reactor. This study describes the COREDAX code for hexagonal geometry.
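
    As background only, the k-eigenvalue problem that such a core simulator solves can be illustrated with a one-group, one-dimensional finite-difference diffusion model and power iteration. This is not the AFEN method, and the cross sections below are hypothetical:

        import numpy as np

        n, h = 50, 1.0                          # mesh cells, cell width [cm]
        D, sig_a, nu_sig_f = 1.2, 0.03, 0.035   # hypothetical one-group constants

        # Loss operator: -D d2/dx2 + sigma_a with zero-flux boundaries.
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 2 * D / h**2 + sig_a
            if i > 0:
                A[i, i - 1] = -D / h**2
            if i < n - 1:
                A[i, i + 1] = -D / h**2

        phi, k = np.ones(n), 1.0
        for _ in range(200):                    # power iteration on the fission source
            phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
            k *= phi_new.sum() / phi.sum()
            phi = phi_new
        print("k-effective ~", round(k, 5))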

  7. The Genomic Code: Genome Evolution and Potential Applications

    KAUST Repository

    Bernardi, Giorgio

    2016-01-25

    The genome of metazoans is organized according to a genomic code which comprises three laws: 1) Compositional correlations hold between contiguous coding and non-coding sequences, as well as among the three codon positions of protein-coding genes; these correlations are the consequence of the fact that the genomes under consideration consist of fairly homogeneous, long (≥200Kb) sequences, the isochores; 2) Although isochores are defined on the basis of purely compositional properties, GC levels of isochores are correlated with all tested structural and functional properties of the genome; 3) GC levels of isochores are correlated with chromosome architecture from interphase to metaphase; in the case of interphase the correlation concerns isochores and the three-dimensional “topological associated domains” (TADs); in the case of mitotic chromosomes, the correlation concerns isochores and chromosomal bands. Finally, the genomic code is the fourth and last pillar of molecular biology, the first three pillars being 1) the double helix structure of DNA; 2) the regulation of gene expression in prokaryotes; and 3) the genetic code.

  8. Retrotransposons and non-protein coding RNAs

    DEFF Research Database (Denmark)

    Mourier, Tobias; Willerslev, Eske

    2009-01-01

    does not merely represent spurious transcription. We review examples of functional RNAs transcribed from retrotransposons, and address the collection of non-protein coding RNAs derived from transposable element sequences, including numerous human microRNAs and the neuronal BC RNAs. Finally, we review...

  9. Distinct timescales of population coding across cortex.

    Science.gov (United States)

    Runyan, Caroline A; Piasini, Eugenio; Panzeri, Stefano; Harvey, Christopher D

    2017-08-03

    The cortex represents information across widely varying timescales. For instance, sensory cortex encodes stimuli that fluctuate over few tens of milliseconds, whereas in association cortex behavioural choices can require the maintenance of information over seconds. However, it remains poorly understood whether diverse timescales result mostly from features intrinsic to individual neurons or from neuronal population activity. This question remains unanswered, because the timescales of coding in populations of neurons have not been studied extensively, and population codes have not been compared systematically across cortical regions. Here we show that population codes can be essential to achieve long coding timescales. Furthermore, we find that the properties of population codes differ between sensory and association cortices. We compared coding for sensory stimuli and behavioural choices in auditory cortex and posterior parietal cortex as mice performed a sound localization task. Auditory stimulus information was stronger in auditory cortex than in posterior parietal cortex, and both regions contained choice information. Although auditory cortex and posterior parietal cortex coded information by tiling in time neurons that were transiently informative for approximately 200 milliseconds, the areas had major differences in functional coupling between neurons, measured as activity correlations that could not be explained by task events. Coupling among posterior parietal cortex neurons was strong and extended over long time lags, whereas coupling among auditory cortex neurons was weak and short-lived. Stronger coupling in posterior parietal cortex led to a population code with long timescales and a representation of choice that remained consistent for approximately 1 second. In contrast, auditory cortex had a code with rapid fluctuations in stimulus and choice information over hundreds of milliseconds. Our results reveal that population codes differ across cortex

  10. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near-neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure in which each iteration admits an analytical discrete solution and which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is instantiated in this work by both a supervised and an unsupervised hashing loss, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
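
    A hedged sketch of the style of update described above (a gradient step on a smooth loss followed by a closed-form projection back onto binary codes) is given below; it is not the authors' exact DPLM algorithm, and the loss, data and step size are hypothetical:

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.standard_normal((100, 16))          # data to approximate (hypothetical)
        D = rng.standard_normal((8, 16))            # fixed projection basis (hypothetical)
        B = np.sign(rng.standard_normal((100, 8)))  # binary codes in {-1, +1}

        mu = 0.02                                   # step size
        for _ in range(100):
            grad = (B @ D - X) @ D.T                # gradient of 0.5*||B D - X||^2 w.r.t. B
            B = np.sign(B - mu * grad)              # discrete step: projection onto {-1, +1}
            B[B == 0] = 1                           # break rare ties deterministically
        print("codes:", B.shape, "loss:", round(0.5 * np.linalg.norm(B @ D - X) ** 2, 2))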

  11. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  12. A finite range coupled channel Born approximation code

    International Nuclear Information System (INIS)

    Nagel, P.; Koshel, R.D.

    1978-01-01

    The computer code OUKID calculates differential cross sections for direct transfer nuclear reactions in which multistep processes, arising from strongly coupled inelastic states in both the target and residual nuclei, are possible. The code is designed for heavy ion reactions where full finite range and recoil effects are important. Distorted wave functions for the elastic and inelastic scattering are calculated by solving sets of coupled differential equations using a Matrix Numerov integration procedure. These wave functions are then expanded into bases of spherical Bessel functions by the plane-wave expansion method. This approach allows the six-dimensional integrals for the transition amplitude to be reduced to products of two one-dimensional integrals. Thus, the inelastic scattering is treated in a coupled channel formalism while the transfer process is treated in a finite range born approximation formalism. (Auth.)
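
    The Numerov scheme mentioned above can be illustrated on a single uncoupled equation y'' = f(r) y; this is only a sketch of the integrator (here applied to a free wave, so the exact solution is sin(kr)/k), not the coupled-channel OUKID implementation:

        import numpy as np

        k, h, n = 1.0, 0.01, 2000
        r = h * np.arange(n)
        f = -k**2 * np.ones(n)                  # y'' = f(r) y with f = -k^2

        y = np.zeros(n)
        y[0], y[1] = 0.0, np.sin(k * h) / k     # start from the known solution values
        w = (1.0 - h**2 * f / 12.0) * y         # Numerov auxiliary variable
        for i in range(1, n - 1):
            w[i + 1] = 2.0 * w[i] - w[i - 1] + h**2 * f[i] * y[i]
            y[i + 1] = w[i + 1] / (1.0 - h**2 * f[i + 1] / 12.0)

        print("max error vs sin(kr)/k:", np.abs(y - np.sin(k * r) / k).max())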

  13. Fault tree analysis. Implementation of the WAM-codes

    International Nuclear Information System (INIS)

    Bento, J.P.; Poern, K.

    1979-07-01

    The report describes work in progress at Studsvik on the implementation of the WAM code package for fault tree analysis. These codes, originally developed under EPRI contract by Sciences Applications Inc, allow, in contrast with other fault tree codes, all Boolean operations, thus permitting the modeling of ''NOT'' conditions and dependent components. To concretize the implementation of these codes, the auxiliary feed-water system of the Swedish BWR Oskarshamn 2 was chosen for the reliability analysis. For this system, both the mean unavailability and the probability density function of the top event - the undesired event - of the system fault tree were calculated, the latter using a Monte-Carlo simulation technique. The present study is the first part of a work performed under contract with the Swedish Nuclear Power Inspectorate. (author)
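
    As an illustration of the kind of calculation involved (not the WAM codes themselves), the top-event probability of a tiny fault tree containing AND, OR and NOT logic can be estimated by Monte Carlo simulation; the basic-event probabilities below are made up:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000
        p = {"pump_A": 1e-2, "pump_B": 2e-2, "valve": 5e-3, "maintenance": 0.1}

        # Sample basic-event states: True means the failure/condition is present.
        s = {name: rng.random(n) < q for name, q in p.items()}

        # TOP = (pump_A AND pump_B) OR (valve AND NOT maintenance)
        top = (s["pump_A"] & s["pump_B"]) | (s["valve"] & ~s["maintenance"])
        exact = 1e-2 * 2e-2 + 5e-3 * 0.9 - 1e-2 * 2e-2 * 5e-3 * 0.9
        print(f"Monte Carlo {top.mean():.2e} vs exact {exact:.2e}")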

  14. Functions and Functional Preferences of Code Switching: A Case Study at a Private K-8 School in Turkish Context

    Directory of Open Access Journals (Sweden)

    Karolin DEMIRCI

    2015-06-01

    Full Text Available Due to the changes in the approaches and methods of English language teaching throughout history, the use of the mother tongue (L1) has been one of the most important topics discussed in the foreign language teaching field. Although most of the approaches used nowadays do not support the use of the mother tongue, there is a change in the perception of teachers' code-switching in foreign language (L2) learning classrooms. There are various recent studies suggesting that using the mother tongue facilitates foreign language learning. In this respect, the purpose of this study was to examine teachers' and students' perceptions of L1 use in L2 classrooms and the circumstances under which they preferred using the mother tongue. In addition, learners' preferences regarding teachers' code switching were also analyzed. Both teachers' and students' perceptions and beliefs about code switching were investigated using questionnaires, classroom observations and interviews as data collection tools. There were two observation periods (90 minutes each) in 2nd-grade, 4th-grade and 7th-grade classrooms, in which the circumstances of L1 use were analyzed to determine whether there were any common characteristics. Findings of the study revealed that there were some common circumstances in which teachers code-switched to facilitate learning in the classroom and that students had some clear preferences for their teachers to switch to L1.

  15. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer differential nonlinearity to +0.7% over 98% of the measured range. To convert the continuous code corresponding to the input signal amplitude into the Grey code, the converter exploits the regular alternation of ones and zeroes in each bit of the Grey code as the number of pulses of the continuous code changes continuously. The converter is built from series-155 logic elements; the rate of continuous-code pulses at the converter input is 25 MHz.
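
    The logical relationship such a converter implements is the standard binary-to-Gray (Grey) mapping g = b XOR (b >> 1), under which consecutive counts differ in exactly one bit; a software illustration (not the hardware circuit described above):

        def to_gray(b: int) -> int:
            return b ^ (b >> 1)

        def from_gray(g: int) -> int:
            b = 0
            while g:
                b ^= g
                g >>= 1
            return b

        for b in range(8):
            print(f"{b:03b} -> {to_gray(b):03b}")
        assert all(from_gray(to_gray(b)) == b for b in range(1 << 12))  # 12-bit round trip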

  16. Improvement of blow down model for LEAP code

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Fujimata, Kazuhiro

    2003-03-01

    At the Japan Nuclear Cycle Development Institute, the analysis method for overheated tube rupture in sodium-water reaction accidents in the steam generator of a fast breeder reactor was improved, and the heat transfer conditions in the tube were evaluated based on studies of the critical heat flux (CHF) and post-CHF heat transfer equations used for light water reactors. In this study, the blow-down model of the LEAP code was improved, taking into consideration the above-mentioned evaluation of heat transfer conditions. The improvements to the LEAP code were the following items: the addition of critical heat flux (CHF) correlations by the formula of Katto and the formula of Tong; the addition of post-CHF heat transfer equations by the formula of Condie-Bengston IV and the formula of Groeneveld 5.9; the extension of the physical properties of water and steam up to the critical conditions of water; the expansion of the total number of sections and the improvement of the input form; and the addition of a function to control the valve setting by the PID control model. Calculations and verification were performed with the improved LEAP code in order to confirm these functions. (author)
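
    A generic discrete PID update of the sort such a valve-control function needs is sketched below; the gains and the toy plant response are arbitrary and are not taken from the LEAP code:

        def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
            """One discrete PID update; 'state' keeps the integral and last error."""
            error = setpoint - measured
            state["integral"] += error * dt
            derivative = (error - state["prev_error"]) / dt
            state["prev_error"] = error
            return kp * error + ki * state["integral"] + kd * derivative

        state = {"integral": 0.0, "prev_error": 0.0}
        valve, level = 0.5, 0.8
        for _ in range(200):                       # toy closed loop: valve drives level
            valve = min(1.0, max(0.0, pid_step(1.0, level, state)))
            level += 0.05 * (valve - 0.4)          # hypothetical plant response
        print("final level:", round(level, 3))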

  17. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by inserting the appropriate FORTRAN statements into the designated subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined.
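
    A minimal sketch of the same idea in Python (the real code takes the combining function as user-supplied FORTRAN); the two input distributions and the combining function below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000

        # Hypothetical inputs: a lognormal failure rate and a normally distributed time.
        rate = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
        time = rng.normal(loc=100.0, scale=10.0, size=n)

        combined = 1.0 - np.exp(-rate * time)       # user-defined combining function

        print("mean:", combined.mean())
        print("std: ", combined.std(ddof=1))
        print("90% limits:", np.percentile(combined, [5, 95]))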

  18. Improvement of the MSG code for the MONJU evaporators. Additional function of reverse flow calculation on water/steam model and animation for post processing

    International Nuclear Information System (INIS)

    Toda, Shin-ichi; Yoshikawa, Shinji; Oketani, Kazuhiro

    2003-05-01

    An improved version of the MSG code (Multi-dimensional Thermal-hydraulic Analysis Code for Steam Generators) has been released. The original version was improved in order to calculate reverse flow on the water/steam side and to animate the post-processing data. To calculate reverse flow locally, the code was modified to set the pressure at each divided node point of the water/steam region in the helical-coil heat transfer tubes. The matrix solver was also improved so that the problem can be treated within a practical calculation time despite the increased number of pressure points. In this case, pressure and enthalpy have to be calculated simultaneously; however, it was found that the block-Jacobian method yields a diagonally dominant matrix, which can be solved efficiently with a relaxation method. Calculations of a steady-state condition and of an SG blow-down transient with manual trip operation confirmed the improved calculation capability of the MSG code. In addition, an animation function for the temperature contours on the sodium shell side has been added as post-processing. Since the animation is very effective for understanding the thermal-hydraulic behavior on the sodium shell side of the SG, especially in transient conditions, it enables quicker and more effective analysis and evaluation of the calculation results. (author)
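
    Why diagonal dominance matters for the relaxation solver can be illustrated on a small system with Gauss-Seidel iteration; this is a generic sketch, not the MSG code's actual pressure-enthalpy matrix:

        import numpy as np

        A = np.array([[ 4.0, -1.0,  0.0],
                      [-1.0,  4.0, -1.0],
                      [ 0.0, -1.0,  4.0]])   # strictly diagonally dominant
        b = np.array([1.0, 2.0, 3.0])

        x = np.zeros(3)
        for sweep in range(50):               # Gauss-Seidel relaxation sweeps
            for i in range(3):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]

        print("relaxation solution:", x)
        print("direct solution:    ", np.linalg.solve(A, b))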

  19. Hierarchical differences in population coding within auditory cortex.

    Science.gov (United States)

    Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L

    2017-08-01

    Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their
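
    For readers unfamiliar with the r_noise measure, the sketch below shows one common way to compute a trial-by-trial noise correlation between a pair of simultaneously recorded neurons: z-score the responses within each stimulus condition and correlate the residuals. The simulated spike counts, tuning curves and shared-noise term are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_correlation(counts_a, counts_b, stim_ids):
    """Pearson correlation of trial-to-trial fluctuations after removing each
    neuron's mean response to every stimulus (z-scored per stimulus)."""
    za, zb = [], []
    for s in np.unique(stim_ids):
        idx = stim_ids == s
        za.append((counts_a[idx] - counts_a[idx].mean()) / counts_a[idx].std())
        zb.append((counts_b[idx] - counts_b[idx].mean()) / counts_b[idx].std())
    za, zb = np.concatenate(za), np.concatenate(zb)
    return np.corrcoef(za, zb)[0, 1]

# Toy data: 6 AM depths x 50 trials, with a weak shared noise source.
stim_ids = np.repeat(np.arange(6), 50)
shared = rng.normal(size=stim_ids.size)
rate_a = 10 + 2.0 * stim_ids          # increasing rate-depth function
rate_b = 20 - 1.5 * stim_ids          # decreasing rate-depth function
counts_a = rng.poisson(rate_a) + 1.0 * shared
counts_b = rng.poisson(rate_b) + 1.0 * shared

print("r_noise =", round(noise_correlation(counts_a, counts_b, stim_ids), 3))
```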

  20. Highly conserved non-coding sequences are associated with vertebrate development.

    Directory of Open Access Journals (Sweden)

    Adam Woolfe

    2005-01-01

    Full Text Available In addition to protein coding sequence, the human genome contains a significant amount of regulatory DNA, the identification of which is proving somewhat recalcitrant to both in silico and functional methods. An approach that has been used with some success is comparative sequence analysis, whereby equivalent genomic regions from different organisms are compared in order to identify both similarities and differences. In general, similarities in sequence between highly divergent organisms imply functional constraint. We have used a whole-genome comparison between humans and the pufferfish, Fugu rubripes, to identify nearly 1,400 highly conserved non-coding sequences. Given the evolutionary divergence between these species, it is likely that these sequences are found in, and furthermore are essential to, all vertebrates. Most, and possibly all, of these sequences are located in and around genes that act as developmental regulators. Some of these sequences are over 90% identical across more than 500 bases, being more highly conserved than coding sequence between these two species. Despite this, we cannot find any similar sequences in invertebrate genomes. In order to begin to functionally test this set of sequences, we have used a rapid in vivo assay system using zebrafish embryos that allows tissue-specific enhancer activity to be identified. Functional data is presented for highly conserved non-coding sequences associated with four unrelated developmental regulators (SOX21, PAX6, HLXB9, and SHH), in order to demonstrate the suitability of this screen to a wide range of genes and expression patterns. Of 25 sequence elements tested around these four genes, 23 show significant enhancer activity in one or more tissues. We have identified a set of non-coding sequences that are highly conserved throughout vertebrates. They are found in clusters across the human genome, principally around genes that are implicated in the regulation of development
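
    The screening criterion quoted above (better than 90% identity over 500 bases) can be illustrated with a simple sliding-window identity scan over a pairwise alignment. The toy sequences, window length and threshold in the sketch below are placeholders, not the whole-genome comparison pipeline used in the study.

```python
def conserved_windows(aln_a, aln_b, window=500, min_identity=0.90):
    """Scan a pairwise alignment (two equal-length gapped strings) and return
    (start, identity) for every window exceeding the identity threshold."""
    assert len(aln_a) == len(aln_b)
    hits = []
    for start in range(0, len(aln_a) - window + 1):
        a = aln_a[start:start + window]
        b = aln_b[start:start + window]
        matches = sum(1 for x, y in zip(a, b) if x == y and x != '-')
        identity = matches / window
        if identity >= min_identity:
            hits.append((start, identity))
    return hits

# Toy example with a short window so it runs on tiny placeholder sequences.
human = "ACGTACGTACGTACGTTTGACCA"
fugu  = "ACGTACGTACGTACGATTGACCA"
print(conserved_windows(human, fugu, window=10, min_identity=0.9))
```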

  1. LATTICE: an interactive lattice computer code

    International Nuclear Information System (INIS)

    Staples, J.

    1976-10-01

    LATTICE is a computer code which enables an interactive user to calculate the functions of a synchrotron lattice. This program satisfies the requirements at LBL for a simple interactive lattice program by borrowing ideas from both TRANSPORT and SYNCH. A fitting routine is included

  2. The FOCON96 1.0 computer code

    International Nuclear Information System (INIS)

    Merle-Szeremeta, A.; Thomassin, A.

    1999-01-01

    The Institute of Protection and Nuclear Safety (I.P.S.N.) has developed a computer code, FOCON96 1.0, to calculate the dosimetric consequences of atmospheric radioactive releases from nuclear installations after several years of normal operation. This communication describes the principal characteristics of FOCON96 1.0 and its functionalities. The principal elements of a comparison between FOCON96 1.0 and PC-CREAM (a European computer code developed by the N.R.P.B. and meeting the same criteria) are given here. (N.C.)

  3. Improvement of multi-dimensional realistic thermal-hydraulic system analysis code, MARS 1.3

    International Nuclear Information System (INIS)

    Lee, Won Jae; Chung, Bub Dong; Jeong, Jae Jun; Ha, Kwi Seok

    1998-09-01

    The MARS (Multi-dimensional Analysis of Reactor Safety) code is a multi-dimensional, best-estimate thermal-hydraulic system analysis code. This report describes the new features that have been improved in the MARS 1.3 code since the release of MARS 1.3 in July 1998. The new features include: - implementation of point kinetics model into the 3D module - unification of the heat structure model - extension of the control function to the 3D module variables - improvement of the 3D module input check function. Each of the items has been implemented in the developmental version of the MARS 1.3.1 code and, then, independently verified and assessed. The effectiveness of the new features is well verified and it is shown that these improvements greatly extend the code capability and enhance the user friendliness. Relevant input data changes are also described. In addition to the improvements, this report briefly summarizes the future code developmental activities that are being carried out or planned, such as coupling of MARS 1.3 with the containment code CONTEMPT and the three-dimensional reactor kinetics code MASTER 2.0. (author). 8 refs

  4. Improvement of multi-dimensional realistic thermal-hydraulic system analysis code, MARS 1.3

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Chung, Bub Dong; Jeong, Jae Jun; Ha, Kwi Seok

    1998-09-01

    The MARS (Multi-dimensional Analysis of Reactor Safety) code is a multi-dimensional, best-estimate thermal-hydraulic system analysis code. This report describes the new features that have been improved in the MARS 1.3 code since the release of MARS 1.3 in July 1998. The new features include: - implementation of point kinetics model into the 3D module - unification of the heat structure model - extension of the control function to the 3D module variables - improvement of the 3D module input check function. Each of the items has been implemented in the developmental version of the MARS 1.3.1 code and, then, independently verified and assessed. The effectiveness of the new features is well verified and it is shown that these improvements greatly extend the code capability and enhance the user friendliness. Relevant input data changes are also described. In addition to the improvements, this report briefly summarizes the future code developmental activities that are being carried out or planned, such as coupling of MARS 1.3 with the containment code CONTEMPT and the three-dimensional reactor kinetics code MASTER 2.0. (author). 8 refs.

  5. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.
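
    The core idea, propagating known input uncertainties to a computed output by repeated sampling, can be sketched generically as below. The response function, input distributions and sample size are invented stand-ins; a real FRAP uncertainty study runs the fuel-rod code itself rather than a one-line surrogate.

```python
import numpy as np

rng = np.random.default_rng(42)

def fuel_code_surrogate(gap_conductance, power, conductivity):
    # Hypothetical surrogate for a computed output (e.g. a peak temperature);
    # an actual uncertainty study would call the FRAP code here instead.
    return 500.0 + 0.8 * power / conductivity + 2000.0 / gap_conductance

n = 20_000
# Assumed input uncertainties (units and ranges are illustrative only).
gap  = rng.normal(5000.0, 500.0, n)      # W/m^2-K
pwr  = rng.normal(30.0, 1.5, n)          # kW/m
cond = rng.normal(3.0, 0.2, n)           # W/m-K

out = fuel_code_surrogate(gap, pwr, cond)
print("mean =", out.mean(), " std =", out.std(ddof=1))
print("95% interval =", np.percentile(out, [2.5, 97.5]))
```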

  6. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  7. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, determining the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  8. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination.
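
    The decoding principle, that a sharply peaked code autocorrelation lets the object be recovered by correlating the raw image with the code, can be demonstrated with a random point-array code and FFT-based correlation. The sketch below is schematic and does not reproduce the Fresnel-zone-pattern/holographic reconstruction described in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 128

# Random binary point-array code (one of the suitable code families mentioned).
code = (rng.random((N, N)) < 0.02).astype(float)

# Simple object (transmission image): a small bright square.
obj = np.zeros((N, N))
obj[60:68, 60:68] = 1.0

def ccorr(a, b):
    """Circular cross-correlation via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

# Raw image cast by the extended coded source ~ convolution of object and code.
raw = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(code)))

# Decoding: correlate the raw image with the code; the peaked autocorrelation
# of the code returns (approximately) the object plus a flat background.
decoded = ccorr(raw, code)
peak = np.unravel_index(np.argmax(decoded), decoded.shape)
print("brightest decoded pixel:", peak)   # should fall inside the object square
```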

  9. A simplified Suomi NPP VIIRS dust detection algorithm

    Science.gov (United States)

    Yang, Yikun; Sun, Lin; Zhu, Jinshan; Wei, Jing; Su, Qinghua; Sun, Wenxiao; Liu, Fangwei; Shu, Meiyan

    2017-11-01

    Due to the complex characteristics of dust and sparse ground-based monitoring stations, dust monitoring is facing severe challenges, especially in dust storm-prone areas. Aiming to construct a high-precision dust storm detection model, a pixel database consisting of dust over a variety of typical feature types such as cloud, vegetation, Gobi and ice/snow was constructed, and their distributions of reflectance and Brightness Temperatures (BT) were analysed. Based on these, a new Simplified Dust Detection Algorithm (SDDA) for the Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP VIIRS) is proposed. NPP VIIRS images covering northern China and Mongolia, regions which feature serious dust storms, were selected to perform the dust detection experiments. The monitoring results were compared with the true colour composite images, and results showed that most of the dust areas can be accurately detected, except for fragmented thin dusts over bright surfaces. The dust ground-based measurements obtained from the Meteorological Information Comprehensive Analysis and Process System (MICAPS) and the Ozone Monitoring Instrument Aerosol Index (OMI AI) products were selected for comparison purposes. Results showed that the dust monitoring results agreed well in spatial distribution with the OMI AI dust products and the MICAPS ground-measured data, with an average high accuracy of 83.10%. The SDDA is relatively robust and can realize automatic monitoring of dust storms.

  10. Overview of the ArbiTER edge plasma eigenvalue code

    Science.gov (United States)

    Baver, Derek; Myra, James; Umansky, Maxim

    2011-10-01

    The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.

  11. Validation of the reactor dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kyrki-Rajamaeki, R.

    1994-05-01

    HEXTRAN is a new three-dimensional, hexagonal reactor dynamics code developed in the Technical Research Centre of Finland (VTT) for VVER type reactors. This report describes the validation work of HEXTRAN. The work has been made with the financing of the Finnish Centre for Radiation and Nuclear Safety (STUK). HEXTRAN is particularly intended for calculation of such accidents, in which radially asymmetric phenomena are included and both good neutron dynamics and two-phase thermal hydraulics are important. HEXTRAN is based on already validated codes. The models of these codes have been shown to function correctly also within the HEXTRAN code. The main new model of HEXTRAN, the spatial neutron kinetics model has been successfully validated against LR-0 test reactor and Loviisa plant measurements. Connected with SMABRE, HEXTRAN can be reliably used for calculation of transients including effects of the whole cooling system of VVERs. Further validation plans are also introduced in the report. (orig.). (23 refs., 16 figs., 2 tabs.)

  12. JAERI thermal reactor standard code system for reactor design and analysis SRAC

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro

    1985-01-01

    SRAC, JAERI thermal reactor standard code system for reactor design and analysis, developed in Japan Atomic Energy Research Institute, is for all types of thermal neutron nuclear design and analysis. The code system has undergone extensive verifications to confirm its functions, and has been used in core modification of the research reactor, detailed design of the multi-purpose high temperature gas reactor and analysis of the experiment with a critical assembly. In nuclear calculation with the code system, multi-group lattice calculation is first made with the libraries. Then, with the resultant homogeneous equivalent group constants, reactor core calculation is made. Described are the following: purpose and development of the code system, functions of the SRAC system, bench mark tests and usage state and future development. (Mori, K.)

  13. Coding considerations for standalone molecular dynamics simulations of atomistic structures

    Science.gov (United States)

    Ocaya, R. O.; Terblans, J. J.

    2017-10-01

    The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in material science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab-initio programs for simulation on a standalone computer and illustrates the approach by C language codes in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousands of particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing own codes has the obvious advantage of teaching techniques applicable to new problems.
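
    A minimal version of the velocity-time integration loop described above (shown here in Python rather than the C of the paper) is sketched below using the velocity-Verlet scheme for a short harmonic chain; the potential, masses and step size are arbitrary choices, not the embedded-atom model used for the fcc metals.

```python
import numpy as np

def chain_forces(pos, k=1.0, r0=1.0):
    """Forces from harmonic bonds between neighbouring particles (toy potential)."""
    f = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        r = np.linalg.norm(d)
        fij = k * (r - r0) * d / r        # pulls the pair toward its rest spacing r0
        f[i] += fij
        f[i + 1] -= fij
    return f

def velocity_verlet(pos, vel, mass, dt, steps):
    """Velocity-time integration: velocity-Verlet stepping of the equations of motion."""
    f = chain_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass        # half kick
        pos += dt * vel                   # drift
        f = chain_forces(pos)             # new forces at the updated positions
        vel += 0.5 * dt * f / mass        # second half kick
    return pos, vel

# Three particles on a line, slightly displaced from their rest spacing.
pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [2.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.01, steps=1000)
print(pos.round(3))
```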

  14. Consideration of the Construction Code for TBM-body in ASME BPVC

    International Nuclear Information System (INIS)

    Kim, Dongjun; Kim, Yunjae; Kim, Suk Kwon; Park, Sung Dae; Lee, Dong Won

    2016-01-01

    In this paper, the ASME code is briefly introduced, and the TBM-body is classified in order to select the appropriate ASME section. With the classification of the TBM-body, the appropriate section is determined. The Helium Cooled Ceramic Reflector (HCCR) Test Blanket System (TBS) has been designed by the KO TBM team to research the functions of a breeding blanket. These functions comprise three subjects: 1) tritium breeding, 2) heat conversion and extraction, and 3) neutron and gamma-ray shielding. For the design process, it is necessary to select an appropriate construction code as the design criterion. The ITER Organization (IO) has proposed that RCC-MR Edition 2007 shall be used for the TBM-shield, because the TBM-shield is connected to the vacuum boundary. For the other part of the TBM-set, the TBM-body, there is no constraint on the selected code, and the manufacturer can select an appropriate construction code for the design and fabrication parts. The KO TBM team has considered which code is appropriate for the TBM-body; one candidate is the ASME code. The advantage of choosing ASME is its suitability to the domestic situation. In domestic nuclear plants, the ASME or KEPIC code is used as a regulatory requirement. Based on this, it is possible to prepare domestic fusion plant regulations. In this paper, the construction code for the TBM-body was determined within the ASME BPVC. For the determination of the code, the structure of the ASME BPVC was introduced and the classification of the TBM-body was conducted according to the ITER criteria. The operating conditions of the TBM-body, including creep and irradiation effects, were also considered in determining the construction code

  15. Consideration of the Construction Code for TBM-body in ASME BPVC

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongjun; Kim, Yunjae [Korea Univ., Seoul (Korea, Republic of); Kim, Suk Kwon; Park, Sung Dae; Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this paper, the ASME code is briefly introduced, and the TBM-body is classified in order to select the appropriate ASME section. With the classification of the TBM-body, the appropriate section is determined. The Helium Cooled Ceramic Reflector (HCCR) Test Blanket System (TBS) has been designed by the KO TBM team to research the functions of a breeding blanket. These functions comprise three subjects: 1) tritium breeding, 2) heat conversion and extraction, and 3) neutron and gamma-ray shielding. For the design process, it is necessary to select an appropriate construction code as the design criterion. The ITER Organization (IO) has proposed that RCC-MR Edition 2007 shall be used for the TBM-shield, because the TBM-shield is connected to the vacuum boundary. For the other part of the TBM-set, the TBM-body, there is no constraint on the selected code, and the manufacturer can select an appropriate construction code for the design and fabrication parts. The KO TBM team has considered which code is appropriate for the TBM-body; one candidate is the ASME code. The advantage of choosing ASME is its suitability to the domestic situation. In domestic nuclear plants, the ASME or KEPIC code is used as a regulatory requirement. Based on this, it is possible to prepare domestic fusion plant regulations. In this paper, the construction code for the TBM-body was determined within the ASME BPVC. For the determination of the code, the structure of the ASME BPVC was introduced and the classification of the TBM-body was conducted according to the ITER criteria. The operating conditions of the TBM-body, including creep and irradiation effects, were also considered in determining the construction code.

  16. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  17. Study of nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Ryu, Chang Mo; Kim, Yeon Seung; Eom, Heung Seop; Lee, Jong Bok; Kim, Ho Joon; Choi, Young Gil; Kim, Ko Ryeo

    1989-01-01

    Software maintenance has been one of the most important problems since the late 1970s. We wish to develop a nuclear computer code system to maintain and manage KAERI's nuclear software. As a part of this system, we have developed three code management programs for use on CYBER and PC systems. They are used in the systematic management of computer codes in KAERI. The first program is implemented on the CYBER system to rapidly provide information on nuclear codes to the users. The second and the third programs were implemented on the PC system for the code manager and for the management of data in the Korean language, respectively. In the requirement analysis, we defined each code, magnetic tape, manual and abstract information data. In the conceptual design, we designed retrieval, update, and output functions. In the implementation design, we described the technical considerations of database programs, utilities, and directions for the use of databases. As a result of this research, we compiled the status of the nuclear computer codes which belonged to KAERI up to September 1988. Thus, by using these three database programs, we could provide nuclear computer code information to the users more rapidly. (Author)

  18. Development of dynamic simulation code for fuel cycle of fusion reactor

    International Nuclear Information System (INIS)

    Aoki, Isao; Seki, Yasushi; Sasaki, Makoto; Shintani, Kiyonori; Kim, Yeong-Chan

    1999-02-01

    A dynamic simulation code for the fuel cycle of a fusion experimental reactor has been developed. The code follows the change of the fuel inventory with time in the plasma chamber and the fuel cycle system during 2-day pulse operation cycles. The time dependence of the fuel inventory distribution is evaluated considering the fuel burn and exhaust in the plasma chamber and the purification and supply functions. For each subsystem of the plasma chamber and the fuel cycle system, the fuel inventory equation is written based on the equation of state, considering the fuel burn and the functions of exhaust, purification, and supply. The processing constants of the subsystems for steady states were taken from the values in the ITER Conceptual Design Activity (CDA) report. Using this code, the time dependence of the fuel supply and inventory, depending on the burn state and the subsystem processing functions, is shown. (author)
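
    A heavily simplified version of such an inventory balance is sketched below: each subsystem inventory obeys dI/dt = inflow - outflow, with a burn-up sink in the plasma chamber, stepped explicitly in time. The compartments, processing time constants and burn fraction are invented placeholders, not the ITER CDA values used in the code.

```python
import numpy as np

# Hypothetical processing time constants (s) and burn fraction.
TAU_EXHAUST = 600.0       # plasma chamber -> purification
TAU_PURIFY  = 1800.0      # purification   -> storage
BURN_FRACTION = 0.01      # fraction of fuelled tritium burned per pass
FUEL_RATE = 1.0e-3        # g/s supplied from storage during a pulse

def step(inv, dt, pulse_on):
    """One explicit Euler step of a three-compartment fuel inventory model."""
    chamber, purification, storage = inv
    supply  = FUEL_RATE if pulse_on else 0.0
    burn    = BURN_FRACTION * supply
    exhaust = chamber / TAU_EXHAUST
    purify  = purification / TAU_PURIFY
    d_chamber = supply - burn - exhaust
    d_purif   = exhaust - purify
    d_storage = purify - supply
    return inv + dt * np.array([d_chamber, d_purif, d_storage])

inv = np.array([0.0, 0.0, 100.0])          # start with all tritium in storage (g)
dt, t_end, pulse_len = 1.0, 7200.0, 400.0  # 2 h simulated, 400 s burn per pulse
for n in range(int(t_end / dt)):
    t = n * dt
    inv = step(inv, dt, pulse_on=(t % 1800.0) < pulse_len)
print("chamber, purification, storage inventories (g):", inv.round(3))
```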

  19. GRABGAM: A Gamma Analysis Code for Ultra-Low-Level HPGe SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Winn, W.G.

    1999-07-28

    The GRABGAM code has been developed for analysis of ultra-low-level HPGe gamma spectra. The code employs three different size filters for the peak search, where the largest filter provides best sensitivity for identifying low-level peaks and the smallest filter has the best resolution for distinguishing peaks within a multiplet. GRABGAM basically generates an integral probability F-function for each singlet or multiplet peak analysis, bypassing the usual peak fitting analysis for a differential f-function probability model. Because F is defined by the peak data, statistical limitations for peak fitting are avoided; however, the F-function does provide generic values for peak centroid, full width at half maximum, and tail that are consistent with a Gaussian formalism. GRABGAM has successfully analyzed over 10,000 customer samples, and it interfaces with a variety of supplementary codes for deriving detector efficiencies, backgrounds, and quality checks.
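
    The integral F-function idea can be illustrated briefly: build the normalized cumulative sum of the net counts across a peak region and read a centroid and a Gaussian-consistent FWHM from its percentiles. The synthetic peak and the percentile-based width estimate below are illustrative assumptions, not the actual GRABGAM algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic net counts in channels across a single peak (Gaussian + Poisson noise).
channels = np.arange(980, 1021)
true_centroid, true_sigma = 1000.0, 3.0
expected = 500.0 * np.exp(-0.5 * ((channels - true_centroid) / true_sigma) ** 2)
counts = rng.poisson(expected).astype(float)

# Integral probability F-function: normalized cumulative sum of the counts.
F = np.cumsum(counts) / counts.sum()

def channel_at(prob):
    """Channel where F first reaches a given probability (linear interpolation)."""
    return np.interp(prob, F, channels)

centroid = channel_at(0.5)                                     # median of the peak
sigma_est = 0.5 * (channel_at(0.8413) - channel_at(0.1587))    # ~(84th - 16th)/2
fwhm = 2.3548 * sigma_est                                      # Gaussian-consistent FWHM

print(f"centroid ~ {centroid:.2f}, FWHM ~ {fwhm:.2f} channels")
```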

  20. Accurate evaluation of the Kochin function for added resistance using a high-order finite difference-based seakeeping code

    DEFF Research Database (Denmark)

    Amini-Afshar, Mostafa; Bingham, Harry B.

    At the 32nd IWWWFB in Dalian, we presented our implementation of the far-field method for second-order wave drift forces based on the Kochin function, using the open-source seakeeping code OceanWave3D-Seakeeping. In that work we used Maruo's method (Maruo, 1960), and calculated the added resistance by a line integral along the azimuthal angle Θ around the body in the far-field. Some difficulties were encountered with regard to evaluating the singular and improper integrals, together with identifying the highest frequency limit where we can practically and reliably calculate the Kochin function by a numerical integration over the surface of the body. Motivated by discussions with Prof. Kashiwagi during this workshop (Kashiwagi, 2017), we subsequently applied the Hanaoka transformation (Maruo, 1960) to change the integration domain from Θ to a wave-number like variable m. This allows a method developed...

  1. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  2. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    The TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis of the operating PWRs as well as the PWRs under construction in Korea. The TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semimodular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady state simulation as well as for non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components, operator actions and the transients caused by the malfunctions can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  3. Flexibility of the genetic code with respect to DNA structure

    DEFF Research Database (Denmark)

    Baisnée, P. F.; Baldi, Pierre; Brunak, Søren

    2001-01-01

    Motivation. The primary function of DNA is to carry genetic information through the genetic code. DNA, however, contains a variety of other signals related, for instance, to reading frame, codon bias, pairwise codon bias, splice sites and transcription regulation, nucleosome positioning and DNA structure. Here we study the relationship between the genetic code and DNA structure and address two questions. First, to which degree does the degeneracy of the genetic code and the acceptable amino acid substitution patterns allow for the superimposition of DNA structural signals to protein coding sequences? Second, is the origin or evolution of the genetic code likely to have been constrained by DNA structure? Results. We develop an index for code flexibility with respect to DNA structure. Using five different di- or tri-nucleotide models of sequence-dependent DNA structure, we show...

  4. Temporal Coding of Volumetric Imagery

    Science.gov (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration
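
    The underlying measurement model, per-pixel temporal codes collapsing a video volume into a single coded snapshot, can be sketched in a few lines. The random binary code, volume size and absence of a reconstruction step are illustrative simplifications and do not reproduce CACTI's translating-mask hardware or its inversion algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)

H, W, T = 64, 64, 8                    # small (x, y, t) video volume
video = rng.random((H, W, T))          # placeholder dynamic scene

# Per-pixel, per-frame binary exposure code (in CACTI this comes from a
# translating coded aperture; here it is simply drawn at random).
code = (rng.random((H, W, T)) < 0.5).astype(float)

# Single coded snapshot: each pixel integrates its temporally coded exposure.
snapshot = np.sum(code * video, axis=2)          # shape (H, W)

print("video volume:", video.shape, "-> coded measurement:", snapshot.shape)

# A reconstruction algorithm (e.g. a sparsity-regularized inverse solver)
# would then estimate the T frames back from this single (H, W) measurement.
```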

  5. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  6. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  7. QR code optical encryption using spatially incoherent illumination

    Science.gov (United States)

    Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.

    2017-02-01

    Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness is dependent on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code in the capacity of a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129 × 129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
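
    Since the incoherent scheme amounts to an intensity convolution of the QR image with the key point spread function, a purely digital analogue is easy to sketch: encrypt by convolving with a random non-negative PSF and decrypt with a Wiener-type inverse filter. The random PSF, regularization constant and block pattern below stand in for the experimental kinoform key and a real QR code.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 128

# Placeholder "QR code" (random binary blocks) and a random non-negative PSF standing
# in for the kinoform-generated intensity point spread function (the encryption key).
qr = np.kron((rng.random((32, 32)) < 0.5).astype(float), np.ones((4, 4)))
psf = rng.random((N, N))
psf /= psf.sum()

QR, PSF = np.fft.fft2(qr), np.fft.fft2(psf)

# Encryption: spatially incoherent imaging acts as a convolution of intensities.
encrypted = np.real(np.fft.ifft2(QR * PSF))

# Digital decryption with the known key: Wiener-style inverse filtering.
eps = 1e-8                                       # regularization constant (assumed)
decrypted = np.real(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(PSF)
                                 / (np.abs(PSF) ** 2 + eps)))

# The QR code's built-in error correction would absorb any residual decryption noise.
print("max abs reconstruction error:", float(np.abs(decrypted - qr).max()))
```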

  8. SGV: a code to evaluate plasma reaction rates to a specified accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Devoto, R.S.; Hanson, J.D.

    1978-09-22

    A FORTRAN code to evaluate binary reaction rates (⟨σv⟩) for a plasma to a specified accuracy is described. Distribution functions permitted are (1) two Maxwellian species at different temperatures, (2) beam-Maxwellian, (3) cold gas with Maxwellian, and (4) beam-plasma with a mirror distribution of the form f(v, θ) ∝ f(v) M(cos θ). Several functional forms are permitted for f(v) and M(cos θ). Cross-section subroutines for a number of interactions involving hydrogen, helium, and electrons are included, as is a routine allowing input of numerical data. The code is written as a subroutine to allow ready incorporation into larger plasma codes.
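
    For the simplest case of two Maxwellian species, the relative speed is itself Maxwellian-distributed, so the rate coefficient reduces to ⟨σv⟩ = ∫ σ(v) v f(v) dv. The sketch below evaluates this integral numerically, refining the grid until a requested relative accuracy is met; the cross-section formula is a made-up placeholder, not one of the hydrogen/helium routines in SGV.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def sigma_v_maxwellian(sigma, m1, m2, T1, T2, rel_tol=1e-4):
    """<sigma*v> for two Maxwellian species at temperatures T1, T2 (SI units).

    The relative speed of two Maxwellian populations is itself Maxwellian, with
    reduced mass mu and effective temperature (m2*T1 + m1*T2)/(m1 + m2). The grid
    is refined (doubling the number of points) until the requested accuracy."""
    mu = m1 * m2 / (m1 + m2)
    t_eff = (m2 * T1 + m1 * T2) / (m1 + m2)
    vth = np.sqrt(2.0 * K_B * t_eff / mu)            # thermal speed scale

    def integrate(n):
        v = np.linspace(1e-3 * vth, 8.0 * vth, n)
        f = 4.0 * np.pi * v**2 * (mu / (2.0 * np.pi * K_B * t_eff))**1.5 \
            * np.exp(-mu * v**2 / (2.0 * K_B * t_eff))
        y = sigma(v) * v * f
        return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(v))   # trapezoid rule

    n, result = 256, None
    while True:
        refined = integrate(n)
        if result is not None and abs(refined - result) <= rel_tol * abs(refined):
            return refined
        n, result = 2 * n, refined

def toy_sigma(v):
    # Made-up, smooth cross section (m^2), purely for demonstration.
    return 1.0e-20 / (1.0 + (v / 1.0e5) ** 2)

m_p = 1.6726e-27   # proton mass, kg
print(sigma_v_maxwellian(toy_sigma, m_p, m_p, T1=1.0e7, T2=2.0e7))
```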

  9. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for code theory than codes over classical finite fields.

  10. Amino acid codes in mitochondria as possible clues to primitive codes

    Science.gov (United States)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.

  11. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  12. MCB. A continuous energy Monte Carlo burnup simulation code

    International Nuclear Information System (INIS)

    Cetnar, J.; Wallenius, J.; Gudowski, W.

    1999-01-01

    A code for the integrated simulation of neutronics and burnup based upon continuous energy Monte Carlo techniques and transmutation trajectory analysis has been developed. Being especially well suited for studies of nuclear waste transmutation systems, the code is an extension of the well validated MCNP transport program of Los Alamos National Laboratory. Among the advantages of the code (named MCB) is a fully integrated data treatment combined with a time-stepping routine that automatically corrects for burnup dependent changes in reaction rates, neutron multiplication, material composition and self-shielding. Fission product yields are treated as continuous functions of incident neutron energy, using a non-equilibrium thermodynamical model of the fission process. In the present paper a brief description of the code and the applied methods is given. (author)

  13. MARS CODE MANUAL VOLUME III - Programmer's Manual

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Hwang, Moon Kyu; Jeong, Jae Jun; Kim, Kyung Doo; Bae, Sung Won; Lee, Young Jin; Lee, Won Jae

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF by that of RELAP5. This programmer's manual provides a complete overview of the code structure and the input/output functions of MARS. In addition, brief descriptions of each subroutine and of the major variables used in MARS are also included in this report, so that the report should be very useful for code maintenance. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as this input scheme allows minimal modification between the inputs of RELAP5 and MARS 3.1. The MARS 3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  14. Developmental programming of long non-coding RNAs during postnatal liver maturation in mice.

    Directory of Open Access Journals (Sweden)

    Lai Peng

    Full Text Available The liver is a vital organ with critical functions in metabolism, protein synthesis, and immune defense. Most of the liver functions are not mature at birth and many changes happen during postnatal liver development. However, it is unclear what changes occur in liver after birth, at what developmental stages they occur, and how the developmental processes are regulated. Long non-coding RNAs (lncRNAs) are involved in organ development and cell differentiation. Here, we analyzed the transcriptome of lncRNAs in mouse liver from perinatal (day -2) to adult (day 60) by RNA-Sequencing, with an attempt to understand the role of lncRNAs in liver maturation. We found around 15,000 genes expressed, including about 2,000 lncRNAs. Most lncRNAs were expressed at a lower level than coding RNAs. Both coding RNAs and lncRNAs displayed three major ontogenic patterns: enriched at neonatal, adolescent, or adult stages. Neighboring coding and non-coding RNAs showed the trend to exhibit highly correlated ontogenic expression patterns. Gene ontology (GO) analysis revealed that some lncRNAs enriched at neonatal ages have their neighbor protein coding genes also enriched at neonatal ages and associated with cell proliferation, immune activation related processes, tissue organization pathways, and hematopoiesis; other lncRNAs enriched at adolescent ages have their neighbor protein coding genes associated with different metabolic processes. These data reveal significant functional transition during postnatal liver development and imply the potential importance of lncRNAs in liver maturation.

  15. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  16. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  17. Development of probabilistic fracture mechanics code PASCAL and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Katsuyuki; Onizawa, Kunio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Li, Yinsheng; Kato, Daisuke [Fuji Research Institute Corporation, Tokyo (Japan)

    2001-03-01

    As a part of the aging and structural integrity research for LWR components, a new PFM (Probabilistic Fracture Mechanics) code PASCAL (PFM Analysis of Structural Components in Aging LWR) has been developed since FY1996. This code evaluates the failure probability of an aged reactor pressure vessel subjected to transient loading such as PTS (Pressurized Thermal Shock). The development of the code has aimed to improve the accuracy and reliability of the analysis by introducing new analysis methodologies and algorithms that reflect recent developments in fracture mechanics methodologies and computer performance. The code has new functions such as optimized sampling and cell-dividing procedures in stratified Monte Carlo simulation, an elastic-plastic fracture criterion based on the R6 method, extension analysis models for semi-elliptical cracks, and evaluation of the effect of thermal annealing. In addition, an input data generator for temperature and stress distribution time histories was also prepared in the code. The functions and performance of the code have been confirmed by verification analyses and case studies on the influence parameters. The present phase of the development will be completed in FY2000. Thus this report provides the user's manual and the theoretical background of the code. (author)
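
    The failure-probability estimate at the heart of such a PFM code can be sketched as a Monte Carlo comparison of an applied stress intensity factor against sampled fracture toughness, here with a crude two-cell stratification that concentrates samples in the low-toughness tail. All distributions, the stratification split and the applied-K value are invented for illustration; PASCAL itself treats transient PTS loading, crack-extension models and far more elaborate sampling.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2024)
inv_phi = NormalDist().inv_cdf               # standard normal quantile function

K_APPLIED = 60.0                             # MPa*sqrt(m); assumed constant applied K
MEDIAN_KIC, SIGMA_LN = 110.0, 0.25           # assumed lognormal toughness parameters

def kic_from_uniform(u):
    """Lognormal fracture toughness via inverse-CDF transform of uniforms u in (0,1)."""
    return MEDIAN_KIC * np.exp(SIGMA_LN * np.array([inv_phi(x) for x in u]))

# Two-cell stratified Monte Carlo: spend half of the samples on the low-toughness
# tail (lowest 5% of the toughness distribution), where the failures actually occur.
cells = [(1e-12, 0.05), (0.05, 1.0)]
n_per_cell = 50_000
p_fail = 0.0
for lo, hi in cells:
    u = rng.uniform(lo, hi, n_per_cell)
    kic = kic_from_uniform(u)
    p_fail += (hi - lo) * np.mean(kic < K_APPLIED)   # weight by the cell's probability

print(f"estimated failure probability: {p_fail:.2e}")
```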

  18. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    International Nuclear Information System (INIS)

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    1988-12-01

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can interface with other programs easily. The CATS produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs

  19. Particle tracing code for multispecies gas

    International Nuclear Information System (INIS)

    Eaton, R.R.; Fox, R.L.; Vandevender, W.H.

    1979-06-01

    Details are presented for the development of a computer code designed to calculate the flow of a multispecies gas mixture using particle tracing techniques. The current technique eliminates the need for a full simulation by utilizing local time averaged velocity distribution functions to obtain the dynamic properties for probable collision partners. The development of this concept reduces statistical scatter experienced in conventional Monte Carlo simulations. The technique is applicable to flow problems involving gas mixtures with disparate masses and trace constituents in the Knudsen number, Kn, range from 1.0 to less than 0.01. The resulting code has previously been used to analyze several aerodynamic isotope enrichment devices

  20. Multiple application coded switch development report

    International Nuclear Information System (INIS)

    Bernal, E.L.; Kestly, J.D.

    1979-03-01

    The development of the Multiple Application Coded Switch (MACS) and its related controller are documented; the functional and electrical characteristics are described; the interface requirements defined, and a troubleshooting guide provided. The system was designed for the Safe Secure Trailer System used for secure transportation of nuclear material

  1. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    Science.gov (United States)

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  2. A code system to generate multigroup cross-sections using basic data

    International Nuclear Information System (INIS)

    Garg, S.B.; Kumar, Ashok

    1978-01-01

    For the neutronic studies of nuclear reactors, multigroup cross-sections derived from the basic energy point data are needed. In order to carry out design based studies, these cross-sections should also incorporate the temperature and fuel concentration effects. To meet these requirements, a code system comprising the RESRES, UNRES, FIGERO, INSCAT, FUNMO, AVER1 and BGPONE codes has been adopted. The function of each of these codes is discussed. (author)

  3. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  4. Cooperative optimization and their application in LDPC codes

    Science.gov (United States)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm is a class of message passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm can achieve 2.0 dB gains over the sum-product algorithm at a BER of 4×10^-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, the former can achieve a much lower error floor than the latter once Eb/N0 is higher than 1.8 dB.

  5. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
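
    As background to the record above, the sketch below illustrates classical (static) Shannon coding, the starting point that the dynamic algorithm builds on: each symbol of probability p receives a codeword of length ceil(log2(1/p)) taken from the binary expansion of the cumulative probability. This is the textbook static construction, not the dynamic algorithm of the paper; the example distribution is an assumption.

    ```python
    import math

    def shannon_code(probs):
        """Static Shannon code: sort symbols by decreasing probability and assign
        each symbol the first ceil(log2(1/p)) bits of its cumulative probability."""
        items = sorted(probs.items(), key=lambda kv: -kv[1])
        code, cum = {}, 0.0
        for sym, p in items:
            length = max(1, math.ceil(-math.log2(p)))
            # Binary expansion of the cumulative probability, truncated to 'length' bits.
            frac, bits = cum, []
            for _ in range(length):
                frac *= 2
                bit, frac = divmod(frac, 1)
                bits.append(str(int(bit)))
            code[sym] = "".join(bits)
            cum += p
        return code

    # Example with an assumed source distribution; yields the prefix-free code
    # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}.
    print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
    ```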

  6. The correspondence between projective codes and 2-weight codes

    NARCIS (Netherlands)

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

    The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points ∈ PC with multiplicity γ(w), where w is the weight of

  7. Development of the code package KASKAD for calculations of WWERs

    International Nuclear Information System (INIS)

    Bolobov, P.A.; Lazarenko, A.P.; Tomilov, M.Ju.

    2008-01-01

    The new version of the software package for neutron calculations of WWER cores, KASKAD 2007, consists of several calculational and service modules which are integrated in a common framework. The package is based on the old version, which was expanded with some new functions and new calculational modules, such as: - the BIPR-2007 code, a new code which performs 2-group neutron diffusion calculations of the power distribution in three-dimensional geometry; this code is based on the BIPR-8KN model, provides all possibilities of the BIPR-7A code and uses the same input data; - the PERMAK-2007 code, a pin-by-pin, few-group, multilayer 3-D code for neutron diffusion calculations; - a graphical user interface for input data preparation for the TVS-M code. The report also includes some calculation results obtained with the modified version of the KASKAD 2007 package. (Authors)

  8. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HI32006 Workshop on computer codes benchmarking.

  9. ELCOS: the PSI code system for LWR core analysis. Part II: user's manual for the fuel assembly code BOXER

    International Nuclear Information System (INIS)

    Paratte, J.M.; Grimm, P.; Hollard, J.M.

    1996-02-01

    ELCOS is a flexible code system for the stationary simulation of light water reactor cores. It consists of the four computer codes ETOBOX, BOXER, CORCOD and SILWER. The user's manual of the second one is presented here. BOXER calculates the neutronics in cartesian geometry. The code can roughly be divided into four stages: - organisation: choice of the modules, file manipulations, reading and checking of input data, - fine group fluxes and condensation: one-dimensional calculation of fluxes and computation of the group constants of homogeneous materials and cells, - two-dimensional calculations: geometrically detailed simulation of the configuration in few energy groups, - burnup: evolution of the nuclide densities as a function of time. This manual shows all input commands which can be used while running the different modules of BOXER. (author) figs., tabs., refs

  10. Quality Improvement of MARS Code and Establishment of Code Coupling

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and its coupling with the regulatory auditing code have been accomplished for the establishment of a self-reliable, technology-based regulatory auditing system. The unified auditing system code was also realized by implementing CANDU-specific models and correlations. As a part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual describing the new models and correlations was published. The code coupling methods were verified through the exercise of plant applications. Education-training seminars and technology transfer were performed for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and for other regulatory calculations. The code can be utilized as a base technology for GEN IV reactor applications

  11. Optimized Method for Generating and Acquiring GPS Gold Codes

    Directory of Open Access Journals (Sweden)

    Khaled Rouabah

    2015-01-01

    Full Text Available We propose a simpler and faster Gold codes generator, which can be efficiently initialized to any desired code with a minimum delay. Its principle consists of generating only one sequence (code number 1) from which we can produce all the other different signal codes. This is realized by simply shifting this sequence by different delays that are judiciously determined by using the bicorrelation function characteristics. This is in contrast to the classical Linear Feedback Shift Register (LFSR) based Gold codes generator that requires, in addition to the shift process, a significant number of logic XOR gates and a phase selector to change the code. The presence of all these logic XOR gates in the classical LFSR based Gold codes generator causes the consumption of additional time in the generation and acquisition processes. In addition to its simplicity and its rapidity, the proposed architecture, due to the total absence of XOR gates, requires fewer resources than the conventional Gold generator and can thus be produced at lower cost. The Digital Signal Processing (DSP) implementations have shown that the proposed architecture presents a solution for acquiring Global Positioning System (GPS) satellite signals optimally and in a parallel way.
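
    For context, the conventional approach that the record contrasts with is sketched below: two 10-stage LFSRs (G1 and G2) whose outputs are combined to yield a 1023-chip GPS C/A Gold code, with the satellite-specific code selected by tapping two G2 stages. The feedback taps follow the standard C/A definition; the phase-select taps shown (2, 6) correspond to one PRN and are given only as an example, and the record's single-sequence shifting scheme is not reproduced here.

    ```python
    def gps_ca_code(phase_taps=(2, 6)):
        """Classical LFSR-based Gold code generator for the 1023-chip GPS C/A code.
        phase_taps selects the satellite-specific G2 delay (taps 2 and 6 are used
        here as an example PRN); the proposed scheme in the record instead derives
        every code by shifting a single reference sequence."""
        g1 = [1] * 10                      # registers initialised to all ones
        g2 = [1] * 10
        chips = []
        for _ in range(1023):
            out1 = g1[9]                                         # G1 output: stage 10
            out2 = g2[phase_taps[0] - 1] ^ g2[phase_taps[1] - 1]  # G2i: XOR of two stages
            chips.append(out1 ^ out2)
            fb1 = g1[2] ^ g1[9]                                  # G1 feedback: stages 3, 10
            fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]  # G2 feedback: 2,3,6,8,9,10
            g1 = [fb1] + g1[:9]                                  # shift in the feedback bit
            g2 = [fb2] + g2[:9]
        return chips

    code = gps_ca_code()
    print(len(code), code[:20])
    ```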

  12. An expanding universe of the non-coding genome in cancer biology.

    Science.gov (United States)

    Xue, Bin; He, Lin

    2014-06-01

    Neoplastic transformation is caused by accumulation of genetic and epigenetic alterations that ultimately convert normal cells into tumor cells with uncontrolled proliferation and survival, unlimited replicative potential and invasive growth [Hanahan,D. et al. (2011) Hallmarks of cancer: the next generation. Cell, 144, 646-674]. Although the majority of the cancer studies have focused on the functions of protein-coding genes, emerging evidence has started to reveal the importance of the vast non-coding genome, which constitutes more than 98% of the human genome. A number of non-coding RNAs (ncRNAs) derived from the 'dark matter' of the human genome exhibit cancer-specific differential expression and/or genomic alterations, and it is increasingly clear that ncRNAs, including small ncRNAs and long ncRNAs (lncRNAs), play an important role in cancer development by regulating protein-coding gene expression through diverse mechanisms. In addition to ncRNAs, nearly half of the mammalian genomes consist of transposable elements, particularly retrotransposons. Once depicted as selfish genomic parasites that propagate at the expense of host fitness, retrotransposon elements could also confer regulatory complexity to the host genomes during development and disease. Reactivation of retrotransposons in cancer, while capable of causing insertional mutagenesis and genome rearrangements to promote oncogenesis, could also alter host gene expression networks to favor tumor development. Taken together, the functional significance of non-coding genome in tumorigenesis has been previously underestimated, and diverse transcripts derived from the non-coding genome could act as integral functional components of the oncogene and tumor suppressor network. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Genome-wide identification of coding and non-coding conserved sequence tags in human and mouse genomes

    Directory of Open Access Journals (Sweden)

    Maggi Giorgio P

    2008-06-01

    Full Text Available Abstract Background The accurate detection of genes and the identification of functional regions is still an open issue in the annotation of genomic sequences. This problem affects new genomes but also those of very well studied organisms such as human and mouse where, despite the great efforts, the inventory of genes and regulatory regions is far from complete. Comparative genomics is an effective approach to address this problem. Unfortunately it is limited by the computational requirements needed to perform genome-wide comparisons and by the problem of discriminating between conserved coding and non-coding sequences. This discrimination is often based (thus dependent) on the availability of annotated proteins. Results In this paper we present the results of a comprehensive comparison of human and mouse genomes performed with a new high throughput grid-based system which allows the rapid detection of conserved sequences and accurate assessment of their coding potential. By detecting clusters of coding conserved sequences the system is also suitable to accurately identify potential gene loci. Following this analysis we created a collection of human-mouse conserved sequence tags and carefully compared our results to reliable annotations in order to benchmark the reliability of our classifications. Strikingly, we were able to detect several potential gene loci supported by EST sequences but not corresponding to as yet annotated genes. Conclusion Here we present a new system which allows comprehensive comparison of genomes to detect conserved coding and non-coding sequences and the identification of potential gene loci. Our system does not require the availability of any annotated sequence thus is suitable for the analysis of new or poorly annotated genomes.

  14. Evaporation over sump surface in containment studies: code validation on TOSQAN tests

    International Nuclear Information System (INIS)

    Malet, J.; Gelain, T.; Degrees du Lou, O.; Daru, V.

    2011-01-01

    During the course of a severe accident in a Nuclear Power Plant, water can be collected in the sump containment through steam condensation on walls and spray systems activation. The objective of this paper is to present code validation on evaporative sump tests performed on the TOSQAN facility. The ASTEC-CPA code is used as a lumped-parameter code and specific user-defined-functions are developed for the TONUS-CFD code. The tests are air-steam tests, as well as tests with other non-condensable gases (He, CO 2 and SF 6 ) under steady and transient conditions. The results show a good agreement between codes and experiments, indicating a good behaviour of the sump models in both codes. (author)

  15. Interface code between WIMS-AECL and RFSP-IST for coupling computing

    International Nuclear Information System (INIS)

    Xu Liangwang; Liu Yu; Jia Baoshan

    2007-01-01

    A code based on the Telnet and FTP protocols has been developed in C++ for coupling computing between WIMS-AECL and RFSP-IST. The input documents of WIMS-AECL and RFSP-IST can be generated automatically and submitted to the server, and the output document is downloaded at the end of the computation. The function of analyzing the standard output document is also included in this code. After simple updating, this code can meet the requirements of other codes using input documents, e.g. CATHENA. In a pilot study of the relation between void fraction and reactivity in TACR, some valuable conclusions have been achieved. (authors)
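
    The coupling tool described above is a C++ program; purely as an illustration of the same submit-and-retrieve idea, the hedged Python sketch below uploads an input deck to a server over FTP and downloads the resulting output file. The host name, credentials, and file names are placeholders, and the real interface code's behaviour (automatic input generation, Telnet job control, output parsing) is not reproduced here.

    ```python
    from ftplib import FTP

    # Hypothetical host, credentials, and file names -- placeholders only,
    # not the actual WIMS-AECL/RFSP-IST server configuration.
    HOST, USER, PASSWORD = "compute-server.example.org", "user", "secret"
    INPUT_DECK, OUTPUT_FILE = "case01.inp", "case01.out"

    def submit_and_fetch():
        ftp = FTP(HOST)
        ftp.login(USER, PASSWORD)
        # Upload the locally generated input deck.
        with open(INPUT_DECK, "rb") as f:
            ftp.storbinary(f"STOR {INPUT_DECK}", f)
        # ... the job would be launched and monitored here (e.g. over Telnet/SSH) ...
        # Download the output document produced by the run.
        with open(OUTPUT_FILE, "wb") as f:
            ftp.retrbinary(f"RETR {OUTPUT_FILE}", f.write)
        ftp.quit()

    if __name__ == "__main__":
        submit_and_fetch()
    ```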

  16. Decoding the non-coding RNAs in Alzheimer's disease.

    Science.gov (United States)

    Schonrock, Nicole; Götz, Jürgen

    2012-11-01

    Non-coding RNAs (ncRNAs) are integral components of biological networks with fundamental roles in regulating gene expression. They can integrate sequence information from the DNA code, epigenetic regulation and functions of multimeric protein complexes to potentially determine the epigenetic status and transcriptional network in any given cell. Humans potentially contain more ncRNAs than any other species, especially in the brain, where they may well play a significant role in human development and cognitive ability. This review discusses their emerging role in Alzheimer's disease (AD), a human pathological condition characterized by the progressive impairment of cognitive functions. We discuss the complexity of the ncRNA world and how this is reflected in the regulation of the amyloid precursor protein and Tau, two proteins with central functions in AD. By understanding this intricate regulatory network, there is hope for a better understanding of disease mechanisms and ultimately developing diagnostic and therapeutic tools.

  17. SCAMPI: A code package for cross-section processing

    International Nuclear Information System (INIS)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-01-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis

  18. SCAMPI: A code package for cross-section processing

    Energy Technology Data Exchange (ETDEWEB)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  19. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.

  20. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages, R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  1. Development of a subchannel analysis code MATRA (Ver. α)

    International Nuclear Information System (INIS)

    Yoo, Y. J.; Hwang, D. H.

    1998-04-01

    A subchannel analysis code MATRA-α, an interim version of MATRA, has been developed to run on an IBM PC or HP WS, based on the existing CDC CYBER mainframe version of COBRA-IV-I. The MATRA code is a thermal-hydraulic analysis code based on the subchannel approach for calculating the enthalpy and flow distribution in fuel assemblies and reactor cores for both steady-state and transient conditions. MATRA-α has been provided with an improved structure and with various functions and models to give a more convenient user environment and to increase the code accuracy. Among them, the pressure drop model has been improved to be applicable to non-square-lattice rod arrays, and the lateral transport models between adjacent subchannels have been improved to increase the accuracy in predicting two-phase flow phenomena. Also included in this report are detailed instructions for input data preparation and for auxiliary pre-processors to serve as a guide to those who want to use MATRA-α. In addition, we compared the predictions of MATRA-α with the experimental data on the flow and enthalpy distribution in three sample rod-bundle cases to evaluate the performance of MATRA-α. All the results revealed that the predictions of MATRA-α were better than those of COBRA-IV-I. (author). 16 refs., 1 tab., 13 figs

  2. Use of AERIN code for determining internal doses of transuranic isotopes

    International Nuclear Information System (INIS)

    King, W.C.

    1980-01-01

    The AERIN computer code is a mathematical expression of the ICRP Lung Model. The code was developed at the Lawrence Livermore National Laboratory to compute the body organ burdens and absorbed radiation doses resulting from the inhalation of transuranic isotopes and to predict the amount of activity excreted in the urine and feces as a function of time. Over forty cases of internal exposure have been studied using the AERIN code. The code, as modified, has proven to be extremely versatile. The case studies presented demonstrate the excellent correlation that can be obtained between code predictions and observed bioassay data. In one case study a discrepancy was observed between an in vivo count of the whole body and the application of the code using urine and fecal data as input. The discrepancy was resolved by in vivo skull counts that showed the code had predicted the correct skeletal burden

  3. Algebraic solution of the synthesis problem for coded sequences

    International Nuclear Information System (INIS)

    Leukhin, Anatolii N

    2005-01-01

    The algebraic solution of a 'complex' problem of synthesis of phase-coded (PC) sequences with a zero level of side lobes of the cyclic autocorrelation function (ACF) is proposed. It is shown that the solution of the synthesis problem is connected with the existence of difference sets for a given code dimension. The problem of estimating the number of possible code combinations for a given code dimension is solved. It is pointed out that the problem of synthesis of PC sequences is related to the fundamental problems of discrete mathematics and, first of all, to a number of combinatorial problems, which can be solved, as the number factorisation problem, by algebraic methods using the theory of Galois fields and groups. (Fourth Seminar in Memory of D.N. Klyshko)
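
    The defining property discussed in the record, a phase-coded sequence whose cyclic (periodic) autocorrelation has zero side lobes, can be checked numerically. The sketch below uses a Zadoff-Chu polyphase sequence as a stand-in example of a sequence with this property; it is not the difference-set construction of the paper, and the length and root chosen are assumptions.

    ```python
    import numpy as np

    def zadoff_chu(N, u=1):
        """Polyphase sequence of odd length N (root u coprime to N) with ideal
        periodic autocorrelation; used only as an example, not the paper's construction."""
        n = np.arange(N)
        return np.exp(-1j * np.pi * u * n * (n + 1) / N)

    def cyclic_acf(x):
        """Periodic (cyclic) autocorrelation computed via the FFT."""
        X = np.fft.fft(x)
        return np.fft.ifft(X * np.conj(X))

    x = zadoff_chu(13)
    acf = cyclic_acf(x)
    print("peak:", abs(acf[0]))                  # equals the length N = 13
    print("max side lobe:", abs(acf[1:]).max())  # ~0 up to floating-point error
    ```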

  4. Use of GOTHIC Code for Assessment of Equipment Environmental Qualification

    International Nuclear Information System (INIS)

    Cavlina, N.; Feretic, D.; Grgic, D.; Spalj, S.; Spiler, J.

    1996-01-01

    Environmental qualification (EQ) of equipment important to safety in nuclear power plants ensures its capability to perform designated safety function on demand under postulated service conditions, including harsh accident environment (e. g. LOCA, HELB). The computer code GOTHIC was used to calculate pressure and temperature profiles inside NPP Krsko containment during limiting LOCA and MSLB accidents. The results of the new best-estimate containment code are compared to the older CONTEMPT code using the same input data and assumptions. The predictions obtained by both codes are very similar. As a result of the calculation the envelopes of the LOCA and MSLB pressures and temperatures, as used in FSAR/USAR Chapter 6, can be used in EQ project. (author)

  5. OM Code Requirements For MOVs -- OMN-1 and Appendix III

    Energy Technology Data Exchange (ETDEWEB)

    Kevin G. DeWall

    2011-08-01

    The purpose or scope of the ASME OM Code is to establish the requirements for pre-service and in-service testing of nuclear power plant components to assess their operational readiness. For MOVs this includes those that perform a specific function in shutting down a reactor to the safe shutdown condition, maintaining the safe shutdown condition, and mitigating the consequences of an accident. This paper will present a brief history of industry and regulatory activities related to MOVs and the development of Code requirements to address weaknesses in earlier versions of the OM Code. The paper will discuss the MOV requirements contained in the 2009 version of ASME OM Code, specifically Mandatory Appendix III and OMN-1, Revision 1.

  6. OM Code Requirements For MOVs -- OMN-1 and Appendix III

    International Nuclear Information System (INIS)

    DeWall, Kevin G.

    2011-01-01

    The purpose or scope of the ASME OM Code is to establish the requirements for pre-service and in-service testing of nuclear power plant components to assess their operational readiness. For MOVs this includes those that perform a specific function in shutting down a reactor to the safe shutdown condition, maintaining the safe shutdown condition, and mitigating the consequences of an accident. This paper will present a brief history of industry and regulatory activities related to MOVs and the development of Code requirements to address weaknesses in earlier versions of the OM Code. The paper will discuss the MOV requirements contained in the 2009 version of ASME OM Code, specifically Mandatory Appendix III and OMN-1, Revision 1.

  7. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  8. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  9. A restructuring of CF package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, K. R.; Kim, D. H.; Cho, S. W.

    2004-01-01

    The CF package, which evaluates user-specified 'control functions' and applies them to define or control various aspects of the computation, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the CF package. The data structure of the current MELCOR code, which uses FORTRAN77, makes the meaning of the variables difficult to grasp and wastes memory; the difficulty is even greater for the CF package because its data are location information for other packages' data. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory treatment and an easy understanding of the code. The restructuring of the CF package addressed in this paper includes module development and subroutine modification, and treats MELGEN, which generates the data file, as well as MELCOR, which processes the calculation. The verification has been done by comparing the results of the modified code with those from the existing code. As the trends are similar to each other, it hints that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code domestication thanks to direct understanding of each variable and easy implementation of modified or newly developed models

  10. Novel classes of non-coding RNAs and cancer

    Directory of Open Access Journals (Sweden)

    Sana Jiri

    2012-05-01

    Full Text Available Abstract For the many years, the central dogma of molecular biology has been that RNA functions mainly as an informational intermediate between a DNA sequence and its encoded protein. But one of the great surprises of modern biology was the discovery that protein-coding genes represent less than 2% of the total genome sequence, and subsequently the fact that at least 90% of the human genome is actively transcribed. Thus, the human transcriptome was found to be more complex than a collection of protein-coding genes and their splice variants. Although initially argued to be spurious transcriptional noise or accumulated evolutionary debris arising from the early assembly of genes and/or the insertion of mobile genetic elements, recent evidence suggests that the non-coding RNAs (ncRNAs may play major biological roles in cellular development, physiology and pathologies. NcRNAs could be grouped into two major classes based on the transcript size; small ncRNAs and long ncRNAs. Each of these classes can be further divided, whereas novel subclasses are still being discovered and characterized. Although, in the last years, small ncRNAs called microRNAs were studied most frequently with more than ten thousand hits at PubMed database, recently, evidence has begun to accumulate describing the molecular mechanisms by which a wide range of novel RNA species function, providing insight into their functional roles in cellular biology and in human disease. In this review, we summarize newly discovered classes of ncRNAs, and highlight their functioning in cancer biology and potential usage as biomarkers or therapeutic targets.

  11. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, Koji; Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro

    2001-03-01

    We have developed a high energy particle transport code NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code JAM for the intra-nuclear cascade part. For the evaporation and fission process, we have also implemented a new model, GEM, by which the light nucleus production from the excited residual nucleus can be described. In accordance with the extended applicable energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions are added and the input data format has been made much more user friendly. Due to the implementation of these new calculation functions and utilities, NMTC/JAM enables us to carry out reliable neutronics studies of a large scale target system with complex geometry more accurately and easily than before. This report serves as a user manual of the code. (author)

  12. Long non-coding RNAs and mRNAs profiling during spleen development in pig.

    Science.gov (United States)

    Che, Tiandong; Li, Diyan; Jin, Long; Fu, Yuhua; Liu, Yingkai; Liu, Pengliang; Wang, Yixin; Tang, Qianzi; Ma, Jideng; Wang, Xun; Jiang, Anan; Li, Xuewei; Li, Mingzhou

    2018-01-01

    Genome-wide transcriptomic studies in humans and mice have become extensive and mature. However, a comprehensive and systematic understanding of protein-coding genes and long non-coding RNAs (lncRNAs) expressed during pig spleen development has not been achieved. LncRNAs are known to participate in regulatory networks for an array of biological processes. Here, we constructed 18 RNA libraries from developing fetal pig spleen (55 days before birth), postnatal pig spleens (0, 30, 180 days and 2 years after birth), and the samples from the 2-year-old Wild Boar. A total of 15,040 lncRNA transcripts were identified among these samples. We found that the temporal expression pattern of lncRNAs was more restricted than observed for protein-coding genes. Time-series analysis showed two large modules for protein-coding genes and lncRNAs. The up-regulated module was enriched for genes related to immune and inflammatory function, while the down-regulated module was enriched for cell proliferation processes such as cell division and DNA replication. Co-expression networks indicated the functional relatedness between protein-coding genes and lncRNAs, which were enriched for similar functions over the series of time points examined. We identified numerous differentially expressed protein-coding genes and lncRNAs in all five developmental stages. Notably, ceruloplasmin precursor (CP), a protein-coding gene participating in antioxidant and iron transport processes, was differentially expressed in all stages. This study provides the first catalog of the developing pig spleen, and contributes to a fuller understanding of the molecular mechanisms underpinning mammalian spleen development.

  13. Management-retrieval code system of fission barrier parameter sub-library

    International Nuclear Information System (INIS)

    Zhang Limin; Su Zongdi; Ge Zhigang

    1995-01-01

    The fission barrier parameter (FBP) library, which is a sub-library of the Chinese Evaluated Nuclear Parameter Library (CENPL), stores various popularly used fission barrier parameters from different historical periods; the required fission barrier parameters can be retrieved by using the management-retrieval code system of the FBP sub-library. The function, features and operation instructions of the code system are described briefly

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  15. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    International Nuclear Information System (INIS)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries

  16. New quantum codes derived from a family of antiprimitive BCH codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication system and quantum information theory. In this paper, we study the construction of quantum codes from a family of q2-ary BCH codes with length n=q2m+1 (also called antiprimitive BCH codes in the literature), where q≥4 is a power of 2 and m≥2. By a detailed analysis of some useful properties about q2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q2-ary primitive BCH codes. Consequently, via Hermitian Construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
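
    The construction in the record hinges on q2-ary cyclotomic cosets modulo n = q^(2m) + 1. A small, hedged helper for listing such cosets is sketched below; the parameters chosen (q = 4, m = 2) are merely an example, and the Hermitian dual-containing test and the actual code construction of the paper are not implemented.

    ```python
    def cyclotomic_cosets(q2, n):
        """q^2-ary cyclotomic cosets modulo n: the orbit of each residue s under
        multiplication by q^2 (mod n). Example parameters below are assumptions."""
        seen, cosets = set(), []
        for s in range(n):
            if s in seen:
                continue
            coset, x = [], s
            while x not in coset:
                coset.append(x)
                x = (x * q2) % n
            cosets.append(coset)
            seen.update(coset)
        return cosets

    q, m = 4, 2                  # example values: q a power of 2, m >= 2
    n = q ** (2 * m) + 1         # length of the antiprimitive BCH code family
    cosets = cyclotomic_cosets(q * q, n)
    print(f"n = {n}, number of q^2-ary cosets: {len(cosets)}")
    print("coset of 1:", cosets[1] if len(cosets) > 1 else cosets[0])
    ```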

  17. Surface acoustic wave coding for orthogonal frequency coded devices

    Science.gov (United States)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  18. Reference manual for the POISSON/SUPERFISH Group of Codes

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
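
    To make the discretization idea concrete, here is a deliberately simplified sketch: a 2-D Poisson equation solved by Jacobi iteration on a small uniform mesh with zero Dirichlet boundaries. The production POISSON/SUPERFISH codes use their own mesh generation and solution machinery, so this is only an illustration of replacing the differentials (dx, dy) by finite differences (ΔX, ΔY); the mesh size, source term and iteration count are assumptions.

    ```python
    import numpy as np

    # Toy uniform-mesh solver for -(u_xx + u_yy) = f on the unit square with
    # u = 0 on the boundary, by Jacobi iteration on a five-point stencil.
    # Mesh size, source term and iteration count are illustrative assumptions,
    # not POISSON/SUPERFISH input.
    N = 41                        # mesh points per side
    h = 1.0 / (N - 1)             # uniform spacing: delta X = delta Y = h
    f = np.ones((N, N))           # example source term (e.g. a uniform charge density)
    u = np.zeros((N, N))          # boundary values stay fixed at zero

    for _ in range(5000):
        u_new = u.copy()
        # Five-point stencil: u_ij = (sum of the four neighbours + h^2 * f_ij) / 4
        u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                    u[1:-1, 2:] + u[1:-1, :-2] +
                                    h * h * f[1:-1, 1:-1])
        if np.max(np.abs(u_new - u)) < 1e-8:
            u = u_new
            break
        u = u_new

    print("potential at the mesh centre:", u[N // 2, N // 2])
    ```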

  19. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
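
    The subband-adaptive idea underlying both schemes can be illustrated with a toy example: estimate the SNR of each subband and pick the highest-order modulation whose threshold is met. The thresholds and per-subband SNRs below are made-up illustrative numbers, not the paper's switching levels, and the turbo encoding itself is omitted.

    ```python
    import numpy as np

    # Assumed switching thresholds (dB) for each modulation order -- illustrative
    # values only, not the thresholds used in the paper.
    THRESHOLDS = [("no transmission", -np.inf), ("BPSK", 6.0), ("QPSK", 9.0),
                  ("8AMPM", 12.0), ("16QAM", 16.0), ("64QAM", 22.0)]

    def pick_modulation(snr_db):
        """Return the highest-order scheme whose threshold the subband SNR meets."""
        choice = THRESHOLDS[0][0]
        for name, threshold in THRESHOLDS:
            if snr_db >= threshold:
                choice = name
        return choice

    # Example per-subband SNR estimates (dB) on a frequency-selective channel.
    rng = np.random.default_rng(1)
    subband_snr = rng.uniform(4.0, 26.0, size=8)
    for i, snr in enumerate(subband_snr):
        print(f"subband {i}: {snr:5.1f} dB -> {pick_modulation(snr)}")
    ```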

  20. An RNA-Seq strategy to detect the complete coding and non-coding transcriptome including full-length imprinted macro ncRNAs.

    Directory of Open Access Journals (Sweden)

    Ru Huang

    Full Text Available Imprinted macro non-protein-coding (nc) RNAs are cis-repressor transcripts that silence multiple genes in at least three imprinted gene clusters in the mouse genome. Similar macro or long ncRNAs are abundant in the mammalian genome. Here we present the full coding and non-coding transcriptome of two mouse tissues: differentiated ES cells and fetal head, using an optimized RNA-Seq strategy. The data produced is highly reproducible in different sequencing locations and is able to detect the full length of imprinted macro ncRNAs such as Airn and Kcnq1ot1, whose length ranges between 80-118 kb. Transcripts show a more uniform read coverage when RNA is fragmented with RNA hydrolysis compared with cDNA fragmentation by shearing. Irrespective of the fragmentation method, all coding and non-coding transcripts longer than 8 kb show a gradual loss of sequencing tags towards the 3' end. Comparisons to published RNA-Seq datasets show that the strategy presented here is more efficient in detecting known functional imprinted macro ncRNAs and also indicate that standardization of RNA preparation protocols would increase the comparability of the transcriptome between different RNA-Seq datasets.

  1. User manual of FUNF code for fissile material data calculation

    International Nuclear Information System (INIS)

    Zhang, Jingshang

    2006-03-01

    The FUNF code (2005 version) is used to calculate fast neutron reaction data of fissile materials with incident energies from about 1 keV up to 20 MeV. The first version of the FUNF code was completed in 1994. The code has been developed continually since that time and has often been used as an evaluation tool for setting up CENDL and for analyzing measurements of fissile materials. During these years many improvements have been made. In this manual, the format of the input parameter files and the output files, as well as the functions of the flags used in the FUNF code, are introduced in detail, and examples of the format of the input parameter files are given. The FUNF code consists of the spherical optical model, the Hauser-Feshbach model, and the unified Hauser-Feshbach and exciton model. (authors)

  2. Challenges to code status discussions for pediatric patients.

    Directory of Open Access Journals (Sweden)

    Katherine E Kruse

    (p≤0.0001). Attending physicians and trainees perceive families as more receptive to code status discussions than nurses (p<0.0001 and p = 0.0018, respectively). Providers have poor understanding of code status options and differ significantly in their comfort having code status discussions and their perceptions of these discussions. These findings may reflect inherent differences among providers, but may also reflect discordant visions of appropriate care and function as a potential source of moral distress. Lack of knowledge of code status options and differences in provider perceptions are likely barriers to quality communication surrounding end-of-life options.

  3. Dynamic detection technology of malicious code for Android system

    Directory of Open Access Journals (Sweden)

    Li Boya

    2017-02-01

    Full Text Available With the increasing popularization of mobile phones, people's dependence on them is rising and the security problems become more and more prominent. Based on the calling of APK file permissions and API functions in the Android system, this paper proposes a dynamic detection method based on API interception technology to detect malicious code. The experimental results show that this method can effectively detect malicious code in the Android system.

  4. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  5. Parallel and vector implementation of APROS simulator code

    International Nuclear Information System (INIS)

    Niemi, J.; Tommiska, J.

    1990-01-01

    In this paper the vector and parallel processing implementation of a general purpose simulator code is discussed. In this code the utilization of vector processing is straightforward. In addition to the loop level parallel processing, the functional decomposition and the domain decomposition have been considered. Results represented for a PWR-plant simulation illustrate the potential speed-up factors of the alternatives. It turns out that the loop level parallelism and the domain decomposition are the most promising alternative to employ the parallel processing. (author)

  6. APC: A New Code for Atmospheric Polarization Computations

    Science.gov (United States)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  7. The nuclear reaction model code MEDICUS

    International Nuclear Information System (INIS)

    Ibishia, A.I.

    2008-01-01

    The new computer code MEDICUS has been used to calculate cross sections of nuclear reactions. The code, implemented in the MATLAB 6.5, Mathematica 5, and Fortran 95 programming languages, can be run in graphical and command line mode. A Graphical User Interface (GUI) has been built that allows the user to perform calculations and to plot results just by mouse clicking. The MS Windows XP and Red Hat Linux platforms are supported. MEDICUS is a modern nuclear reaction code that can compute charged particle-, photon-, and neutron-induced reactions in the energy range from thresholds to about 200 MeV. The calculation of the cross sections of nuclear reactions is done in the framework of the Exact Many-Body Nuclear Cluster Model (EMBNCM), Direct Nuclear Reactions, Pre-equilibrium Reactions, Optical Model, DWBA, and Exciton Model with Cluster Emission. The code can also be used for the calculation of the nuclear cluster structure of nuclei. We have calculated nuclear cluster models for some nuclei such as 177 Lu, 90 Y, and 27 Al. It has been found that the nucleus 27 Al can be represented through two different nuclear cluster models: 25 Mg + d and 24 Na + 3 He. Cross sections as a function of energy for the reaction 27 Al( 3 He,x) 22 Na, established as a production method of 22 Na, are calculated by the code MEDICUS. Theoretical calculations of cross sections are in good agreement with experimental results. Reaction mechanisms are taken into account. (author)

  8. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  9. Quantum Codes From Cyclic Codes Over The Ring R 2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R2 denote the ring F2 + μF2 + υF2 + μυF2 + wF2 + μwF2 + υwF2 + μυwF2. In this study, we construct quantum codes from cyclic codes over the ring R2, for arbitrary length n, with the restrictions μ² = 0, υ² = 0, w² = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for cyclic codes over R2 to contain their duals. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R2 and we give an example of quantum error-correcting codes from cyclic codes over R2. (paper)

  10. Development of LWR fuel performance code FEMAXI-6

    International Nuclear Information System (INIS)

    Suzuki, Motoe

    2006-01-01

    LWR fuel performance code: FEMAXI-6 (Finite Element Method in AXIs-symmetric system) is a representative fuel analysis code in Japan. Development history, background, design idea, features of model, and future are stated. Characteristic performance of LWR fuel and analysis code, what is model, development history of FEMAXI, use of FEMAXI code, fuel model, and a special feature of FEMAXI model is described. As examples of analysis, PCMI (Pellet-Clad Mechanical Interaction), fission gas release, gap bonding, and fission gas bubble swelling are reported. Thermal analysis and dynamic analysis system of FEMAXI-6, function block at one time step of FEMAXI-6, analytical example of PCMI in the output increase test by FEMAXI-III, analysis of fission gas release in Halden reactor by FEMAXI-V, comparison of the center temperature of fuel in Halden reactor, and analysis of change of diameter of fuel rod in high burn up BWR fuel are shown. (S.Y.)

  11. ELCOS: the PSI code system for LWR core analysis. Part II: user's manual for the fuel assembly code BOXER

    Energy Technology Data Exchange (ETDEWEB)

    Paratte, J.M.; Grimm, P.; Hollard, J.M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1996-02-01

    ELCOS is a flexible code system for the stationary simulation of light water reactor cores. It consists of the four computer codes ETOBOX, BOXER, CORCOD and SILWER. The user's manual of the second one is presented here. BOXER calculates the neutronics in cartesian geometry. The code can roughly be divided into four stages: - organisation: choice of the modules, file manipulations, reading and checking of input data, - fine group fluxes and condensation: one-dimensional calculation of fluxes and computation of the group constants of homogeneous materials and cells, - two-dimensional calculations: geometrically detailed simulation of the configuration in few energy groups, - burnup: evolution of the nuclide densities as a function of time. This manual shows all input commands which can be used while running the different modules of BOXER. (author) figs., tabs., refs.

  12. Visual encoding of a QR code using a Gaussian modulating function

    Institute of Scientific and Technical Information of China (English)

    郭兴华; 朱小刚

    2017-01-01

    Visually appealing codes incorporate high-level visual features, such as colors, letters, illustrations, or logos. Researchers have attempted to endow the QR code with aesthetic elements, and QR code beautification has been formulated as an optimization problem that minimizes visual perception distortion subject to an acceptable decoding rate. However, the visual quality of the QR code generated by existing methods still requires improvement. The key challenge is the lack of proper understanding or analytical formulations capturing the stability of QR codes under variations in lighting, camera specifications, and even perturbations to the QR codes. Patented and ill-documented algorithms employed to read QR codes cause further difficulties. Consequently, existing approaches are mostly ad hoc and often favor readability at the cost of reduced visual quality. Method This work presents an algorithm that visually encodes a QR code by synthesizing the conventional QR code with a theme image. This task is fulfilled by dividing the theme image into equal-sized non-overlapping blocks and modifying the average luminance of each block to its corresponding module type in the QR code by applying the well-designed Gaussian modulating function. In the Gaussian modulating function, the standard deviation is dynamically determined according to the smoothness of the corresponding module block. The brightness of the central region of the modified module gradually changes along the circular direction and presents a smooth appearance and different sizes, which make it consistent with the human visual system. In addition, the size of the module's brightness-sensing region can be adjusted according to application scenarios and the sensitivity of the human eye to different noises. Result In the experimental stage, visually meaningful QR codes are synthesized by setting different parameters, and their correct decoding rate is tested. The optimal parameters are determined to ensure decoding reliability and make the QR code easily recognizable for
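
    As a rough illustration of the blending step described above (not the authors' exact formulation), the sketch below modulates the luminance of one theme-image block toward its QR module's target value using a Gaussian spatial weight centred on the block, so that the centre of each module carries the code while the periphery keeps the theme image. The block size, sigma and target luminances are assumptions.

    ```python
    import numpy as np

    def modulate_block(block, module_is_dark, sigma=2.0):
        """Blend one theme-image block (2-D array of luminance in [0, 1]) toward the
        luminance required by its QR module, weighted by a Gaussian centred on the
        block so the change is strongest in the middle. All parameters are
        illustrative assumptions, not the paper's optimised settings."""
        h, w = block.shape
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        weight = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * sigma ** 2))
        target = 0.0 if module_is_dark else 1.0   # dark module -> low luminance
        return (1.0 - weight) * block + weight * target

    # Example: an 8x8 block of a mid-grey theme image mapped onto a dark module.
    block = np.full((8, 8), 0.6)
    print(np.round(modulate_block(block, module_is_dark=True), 2))
    ```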

  13. A New Prime Code for Synchronous Optical Code Division Multiple-Access Networks

    Directory of Open Access Journals (Sweden)

    Huda Saleh Abbas

    2018-01-01

    Full Text Available A new spreading code based on a prime code for synchronous optical code-division multiple-access networks that can be used in monitoring applications has been proposed. The new code is referred to as “extended grouped new modified prime code.” This new code has the ability to support more terminal devices than other prime codes. In addition, it patches subsequences with “0s” leading to lower power consumption. The proposed code has an improved cross-correlation resulting in enhanced BER performance. The code construction and parameters are provided. The operating performance, using incoherent on-off keying modulation and incoherent pulse position modulation systems, has been analyzed. The performance of the code was compared with other prime codes. The results demonstrate an improved performance, and a BER floor of 10−9 was achieved.
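
    For reference, the classical prime-code family that the proposed "extended grouped new modified prime code" builds on can be generated as follows: for a prime p, codeword i consists of p blocks of p chips, with a single '1' in block j at position (i·j mod p). The sketch below shows only this basic construction, not the new code of the record.

    ```python
    def prime_codes(p):
        """Basic prime-code family over GF(p): p codewords, each of length p*p and
        weight p. Only the classical construction; the record's extended grouped
        new modified prime code adds further structure not reproduced here."""
        codes = []
        for i in range(p):
            word = [0] * (p * p)
            for j in range(p):                 # block index
                word[j * p + (i * j) % p] = 1  # one pulse per block
            codes.append(word)
        return codes

    # Example for p = 5: five codewords of length 25 and weight 5.
    for i, w in enumerate(prime_codes(5)):
        print(i, "".join(map(str, w)))
    ```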

  14. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    Science.gov (United States)

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  15. Development of a coupled code system based on system transient code, RETRAN, and 3-D neutronics code, MASTER

    International Nuclear Information System (INIS)

    Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.

    2002-01-01

    A coupled code system of RETRAN/MASTER has been developed for best-estimate simulations of interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporation of a 3-D reactor core kinetics analysis code, MASTER into system transient code, RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem developed to verify the performance of a coupled kinetics and system transient codes by OECD/NEA

  16. Two phase nonequilibrium heat transfer in the TRAC-PD2 code

    International Nuclear Information System (INIS)

    Mandell, D.A.; Liles, D.R.

    1980-01-01

    TRAC is a best-estimate, multidimensional, nonequilibrium computer code intended for the analysis of loss-of-coolant accidents (LOCA's) in light water reactors. TRAC-PD2 is the third, detailed, pressurized water reactor version of the code. The TRAC code is modular both by components and by function. That is, vessels, pipes, pumps, etc. can be coupled together in any manner in order to simulate a reactor or a particular experimental facility. Individual physical phenomena are also coded in separate subroutines. This paper discusses the wall to fluid heat transfer coefficient correlations, the interfacial heat transfer models, and presents results for several experimental facilities

  17. nocoRNAc: Characterization of non-coding RNAs in prokaryotes

    Directory of Open Access Journals (Sweden)

    Nieselt Kay

    2011-01-01

    Full Text Available Abstract Background The interest in non-coding RNAs (ncRNAs constantly rose during the past few years because of the wide spectrum of biological processes in which they are involved. This led to the discovery of numerous ncRNA genes across many species. However, for most organisms the non-coding transcriptome still remains unexplored to a great extent. Various experimental techniques for the identification of ncRNA transcripts are available, but as these methods are costly and time-consuming, there is a need for computational methods that allow the detection of functional RNAs in complete genomes in order to suggest elements for further experiments. Several programs for the genome-wide prediction of functional RNAs have been developed but most of them predict a genomic locus with no indication whether the element is transcribed or not. Results We present NOCORNAc, a program for the genome-wide prediction of ncRNA transcripts in bacteria. NOCORNAc incorporates various procedures for the detection of transcriptional features which are then integrated with functional ncRNA loci to determine the transcript coordinates. We applied RNAz and NOCORNAc to the genome of Streptomyces coelicolor and detected more than 800 putative ncRNA transcripts most of them located antisense to protein-coding regions. Using a custom design microarray we profiled the expression of about 400 of these elements and found more than 300 to be transcribed, 38 of them are predicted novel ncRNA genes in intergenic regions. The expression patterns of many ncRNAs are similarly complex as those of the protein-coding genes, in particular many antisense ncRNAs show a high expression correlation with their protein-coding partner. Conclusions We have developed NOCORNAc, a framework that facilitates the automated characterization of functional ncRNAs. NOCORNAc increases the confidence of predicted ncRNA loci, especially if they contain transcribed ncRNAs. NOCORNAc is not restricted to

  18. Utility subroutine package used by Applied Physics Division export codes

    International Nuclear Information System (INIS)

    Adams, C.H.; Derstine, K.L.; Henryson, H. II; Hosteny, R.P.; Toppel, B.J.

    1983-04-01

    This report describes the current state of the utility subroutine package used with codes being developed by the staff of the Applied Physics Division. The package provides a variety of useful functions for BCD input processing, dynamic core-storage allocation and management, binary I/O and data manipulation. The routines were written to conform to coding standards which facilitate the exchange of programs between different computers

  19. Light water reactor fuel analysis code FEMAXI-7. Model and structure [Revised edition

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Amaya, Masaki; Saitou, Hiroaki

    2014-03-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in both normal conditions and anticipated transient conditions. This code is an advanced version which has been produced by extending the former version FEMAXI-6 with numerous functional improvements and extensions. In FEMAXI-7, many new models have been added and parameters have been clearly arranged. Also, to facilitate effective maintenance and accessibility of the code, modularization of subroutines and functions has been attained, and quality comment descriptions of variables or physical quantities have been incorporated in the source code. With these advancements, the FEMAXI-7 code has been upgraded to a versatile analytical tool for high burnup fuel behavior analyses. This report is the revised edition of the first one, which describes in detail the design, basic theory and structure, models and numerical method, and improvements and extensions. The first edition, JAEA-Data/Code 2010-035, was published in 2010. The first edition was extended by the orderly addition and arrangement of explanations of models and organized as the revised edition after a three-year interval. (author)

  20. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes that satisfy the quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian-dual containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n,k,dz/dx

  1. Theoretical Atomic Physics code development II: ACE: Another collisional excitation code

    International Nuclear Information System (INIS)

    Clark, R.E.H.; Abdallah, J. Jr.; Csanak, G.; Mann, J.B.; Cowan, R.D.

    1988-12-01

    A new computer code for calculating collisional excitation data (collision strengths or cross sections) using a variety of models is described. The code uses data generated by the Cowan Atomic Structure code or CATS for the atomic structure. Collisional data are placed on a random access file and can be displayed in a variety of formats using the Theoretical Atomic Physics Code or TAPS. All of these codes are part of the Theoretical Atomic Physics code development effort at Los Alamos. 15 refs., 10 figs., 1 tab

  2. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  3. NSPEC - A neutron spectrum code for beam-heated fusion plasmas

    International Nuclear Information System (INIS)

    Scheffel, J.

    1983-06-01

    A 3-dimensional computer code is described, which computes neutron spectra due to beam heating of fusion plasmas. Three types of interactions are considered: thermonuclear plasma-plasma, beam-plasma and beam-beam interactions. Beam deposition is modelled by the NFREYA code. The applied steady-state beam distribution as a function of pitch angle and velocity contains the effects of energy diffusion, friction, angular scattering, charge exchange, electric field and source pitch angle distribution. The neutron spectra, generated by Monte-Carlo methods, are computed with respect to given lines of sight. This enables the code to be used for neutron diagnostics. (author)

  4. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  5. Importance biasing scheme implemented in the PRIZMA code

    International Nuclear Information System (INIS)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-01-01

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources and material composition, and to obtain parameters specified by the user. There is a capability to calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems which require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem
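
    A minimal sketch of the generic splitting/Russian-roulette importance biasing that such schemes build on; the particle layout, function names and threshold are illustrative assumptions, and PRIZMA's actual implementation is not described in the abstract:

    ```python
    import random

    def transport(source_particles, importance_of, step, threshold=0.5):
        """Generic splitting / Russian-roulette importance biasing.

        step(p) advances a particle one flight/collision and returns False when
        the history ends (absorption or escape); importance_of(region) returns
        a user-assigned importance. Textbook weight-window logic, shown only to
        illustrate the kind of biasing the abstract describes.
        """
        stack = list(source_particles)      # particle: {'region': ..., 'weight': ...}
        while stack:
            p = stack.pop()
            old_imp = importance_of(p["region"])
            if not step(p):                 # history terminated by the physics step
                continue
            ratio = importance_of(p["region"]) / old_imp
            if ratio > 1.0:                 # entered a more important region: split
                n = max(2, round(ratio))
                p["weight"] /= n
                stack.extend(dict(p) for _ in range(n))
            elif ratio < threshold:         # less important region: Russian roulette
                if random.random() < ratio:
                    p["weight"] /= ratio    # survivor carries the removed weight
                    stack.append(p)
            else:
                stack.append(p)
    ```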

  6. Analysis of functioning and efficiency of a code blue system in a tertiary care hospital.

    Science.gov (United States)

    Monangi, Srinivas; Setlur, Rangraj; Ramanathan, Ramprasad; Bhasin, Sidharth; Dhar, Mridul

    2018-01-01

    "Code blue" (CB) is a popular hospital emergency code, which is used by hospitals to alert their emergency response team of any cardiorespiratory arrest. The factors affecting the outcomes of emergencies are related to both the patient and the nature of the event. The primary objective was to analyze the survival rate and factors associated with survival and also practical problems related to functioning of a CB system (CBS). After the approval of hospital ethics committee, an analysis and audit was conducted of all patients on whom a CB had been called in our tertiary care hospital over 24 months. Data collected were demographic data, diagnosis, time of cardiac arrest and activation of CBS, time taken by CBS to reach the patient, presenting rhythm on arrival of CB team, details of cardiopulmonary resuscitation (CPR) such as duration and drugs given, and finally, events and outcomes. Chi-square test and logistic regression analysis were used to analyze the data. A total of 720 CB calls were initiated during the period. After excluding 24 patients, 694 calls were studied and analyzed. Six hundred and twenty were true calls and 74 were falls calls. Of the 620, 422 were cardiac arrests and 198 were medical emergencies. Overall survival was 26%. Survival in patients with cardiac arrests was 11.13%. Factors such as age, presenting rhythm, and duration of CPR were found to have a significant effect on survival. Problems encountered were personnel and equipment related. A CBS is effective in improving the resuscitation efforts and survival rates after inhospital cardiac arrests. Age, presenting rhythm at the time of arrest, and duration of CPR have significant effect on survival of the patient after a cardiac arrest. Technical and staff-related problems need to be considered and improved upon.

  7. Mentor Texts and the Coding of Academic Writing Structures: A Functional Approach

    Science.gov (United States)

    Escobar Alméciga, Wilder Yesid; Evans, Reid

    2014-01-01

    The purpose of the present pedagogical experience was to address the English language writing needs of university-level students pursuing a degree in bilingual education with an emphasis in the teaching of English. Using mentor texts and coding academic writing structures, an instructional design was developed to directly address the shortcomings…

  8. De novo origin of human protein-coding genes.

    Directory of Open Access Journals (Sweden)

    Dong-Dong Wu

    2011-11-01

    Full Text Available The de novo origin of a new protein-coding gene from non-coding DNA is considered to be a very rare occurrence in genomes. Here we identify 60 new protein-coding genes that originated de novo on the human lineage since divergence from the chimpanzee. The functionality of these genes is supported by both transcriptional and proteomic evidence. RNA-seq data indicate that these genes have their highest expression levels in the cerebral cortex and testes, which might suggest that these genes contribute to phenotypic traits that are unique to humans, such as improved cognitive ability. Our results are inconsistent with the traditional view that the de novo origin of new genes is very rare, thus there should be greater appreciation of the importance of the de novo origination of genes.

  9. De Novo Origin of Human Protein-Coding Genes

    Science.gov (United States)

    Wu, Dong-Dong; Irwin, David M.; Zhang, Ya-Ping

    2011-01-01

    The de novo origin of a new protein-coding gene from non-coding DNA is considered to be a very rare occurrence in genomes. Here we identify 60 new protein-coding genes that originated de novo on the human lineage since divergence from the chimpanzee. The functionality of these genes is supported by both transcriptional and proteomic evidence. RNA–seq data indicate that these genes have their highest expression levels in the cerebral cortex and testes, which might suggest that these genes contribute to phenotypic traits that are unique to humans, such as improved cognitive ability. Our results are inconsistent with the traditional view that the de novo origin of new genes is very rare, thus there should be greater appreciation of the importance of the de novo origination of genes. PMID:22102831

  10. Reaction path of energetic materials using THOR code

    Science.gov (United States)

    Durães, L.; Campos, J.; Portugal, A.

    1998-07-01

    The method of predicting the reaction path using the THOR code allows, for isobaric and isochoric adiabatic combustion and CJ detonation regimes, the calculation of the composition and thermodynamic properties of the reaction products of energetic materials. The THOR code assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using the HL EoS. The code allows the possibility of estimating various sets of reaction products, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated and discussed—pure ammonium nitrate and its derived explosive ANFO, and nitromethane—because their equivalence ratios are respectively lower than, near, and greater than stoichiometric. Predictions of the reaction path are in good correlation with experimental values, proving the validity of the proposed method.
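
    A toy illustration of the underlying idea of computing product composition by direct Gibbs free energy minimization under element balance; the species list, standard chemical potentials and elemental abundances are placeholder values, not THOR data, and the HL equation of state is not modeled:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy product mixture: composition found by minimising total Gibbs free
    # energy subject to element balance, in the spirit of the approach above.
    # The mu0 values (standard chemical potentials / RT) and the elemental
    # abundances are placeholders, not THOR data.
    species = ["CO2", "CO", "H2O", "H2", "N2", "O2"]
    mu0 = np.array([-47.0, -28.0, -38.0, 0.0, 0.0, 0.0])
    A = np.array([[1, 1, 0, 0, 0, 0],      # C atoms per molecule
                  [2, 1, 1, 0, 0, 2],      # O
                  [0, 0, 2, 2, 0, 0],      # H
                  [0, 0, 0, 0, 2, 0]])     # N
    b = np.array([1.0, 3.0, 2.0, 1.0])     # elemental abundances of the reactant

    def gibbs(n):
        """G/RT of an ideal mixture with mole numbers n."""
        n = np.maximum(n, 1e-12)
        return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

    res = minimize(gibbs, x0=np.full(len(species), 0.5),
                   bounds=[(1e-10, None)] * len(species),
                   constraints={"type": "eq", "fun": lambda n: A @ n - b},
                   method="SLSQP")
    print(dict(zip(species, res.x.round(4))))
    ```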

  11. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  12. Code compression for VLIW embedded processors

    Science.gov (United States)

    Piccinelli, Emiliano; Sannino, Roberto

    2004-04-01

    The implementation of processors for embedded systems implies various issues: the main constraints are cost, power dissipation and die area. On the other hand, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power consuming, to run on DSPs or CPUs. To overcome this issue, the "SlimCode" proprietary algorithm presented in this paper (patent pending technology) can reduce the dimensions of the program memory. It can run offline and work directly on the binary code the compiler generates, by compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit will be a small ASIC, placed between the Memory Controller and the System bus of the processor, keeping the internal CPU architecture unchanged: this implies that the methodology is completely transparent to the core. We present comparisons versus the state-of-the-art IBM Codepack algorithm, along with its architectural implementation into the ST200 VLIW family core.
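
    A generic sketch of dictionary-based code compression with a small decode table, only to illustrate the idea of shrinking program memory with a simple hardware decompressor; it is not the proprietary SlimCode algorithm (whose details are not public), and all names and sizes are invented:

    ```python
    from collections import Counter

    def compress(code_words, dict_size=255):
        """Toy dictionary compression of a VLIW instruction stream.

        The most frequent 32-bit instruction words are replaced by 1-byte
        indices into a small table; everything else is kept verbatim behind an
        escape byte (0xFF). Generic illustration only -- not SlimCode.
        """
        table = [w for w, _ in Counter(code_words).most_common(dict_size)]
        index = {w: i for i, w in enumerate(table)}
        out = bytearray()
        for w in code_words:
            if w in index:
                out += bytes([index[w]])                   # 1 byte instead of 4
            else:
                out += b"\xff" + w.to_bytes(4, "little")   # escape + literal word
        return table, bytes(out)

    # Usage: a synthetic stream dominated by two instruction words.
    stream = [0x12345678] * 700 + [0xDEADBEEF] * 200 + list(range(100))
    table, packed = compress(stream, dict_size=2)
    print(len(stream) * 4, "->", len(packed), "bytes")     # 4000 -> 1400 bytes
    ```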

  13. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  14. AECL's advanced code program

    Energy Technology Data Exchange (ETDEWEB)

    McGee, G.; Ball, J. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    2012-07-01

    This paper discusses the advanced code project at AECL. The current suite of Analytical, Scientific and Design (ASD) computer codes in use by the Canadian nuclear power industry was mostly developed 20 or more years ago. It is increasingly difficult to develop and maintain, and it consists of many independent tools, so integrated analysis is difficult, time consuming and error-prone. The objectives of this project are to demonstrate that nuclear facility systems, structures and components meet their design objectives in terms of function, cost, and safety; to demonstrate that the nuclear facility meets licensing requirements in terms of the consequences of off-normal events, dose to the public and workers, and impact on the environment; and to demonstrate that the nuclear facility meets operational requirements with respect to on-power fuelling and outage management.

  15. TACO: a finite element heat transfer code

    International Nuclear Information System (INIS)

    Mason, W.E. Jr.

    1980-02-01

    TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady state problems. Either plane or axisymmetric geometries can be analyzed. TACO has the capability to handle time or temperature dependent material properties and materials may be either isotropic or orthotropic. A variety of time and temperature dependent loadings and boundary conditions are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemical reactive kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code
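
    Not TACO itself, but a minimal sketch of the implicit (backward-Euler) time integration that such heat transfer codes use, reduced here to 1D conduction with fixed end temperatures; all values are illustrative:

    ```python
    import numpy as np

    def implicit_heat_1d(T0, alpha, dx, dt, steps, t_left, t_right):
        """Backward-Euler solution of 1D transient heat conduction.

        A minimal 1D finite-difference sketch of implicit time integration,
        which also reaches the steady state when run for many steps.
        """
        n = len(T0)
        r = alpha * dt / dx ** 2
        # Assemble the constant implicit matrix with Dirichlet end conditions.
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
        A[0, 0] = A[-1, -1] = 1.0
        T = T0.astype(float)
        for _ in range(steps):
            rhs = T.copy()
            rhs[0], rhs[-1] = t_left, t_right   # prescribed boundary temperatures
            T = np.linalg.solve(A, rhs)
        return T

    print(implicit_heat_1d(np.zeros(11), alpha=1e-5, dx=0.01, dt=10.0,
                           steps=200, t_left=100.0, t_right=0.0).round(1))
    ```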

  16. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.

    Science.gov (United States)

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun

    2017-09-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only the different phase responses of the unit cells are considered, but different phase sensitivities are also required. Due to the different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, a digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent the low and high phase sensitivities, respectively. By this merit, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.
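
    A small numerical illustration of the frequency coding idea under simplifying assumptions: elements that share a phase at the design frequency but carry digit "0" (low) or "1" (high) phase sensitivity radiate differently when the frequency changes. The spacing, frequencies and sensitivity values below are invented for illustration and are not taken from the paper:

    ```python
    import numpy as np

    def array_factor(bits, freq_ghz, f0=8.0, d=0.02, phi0=0.0, sens=(0.0, 90.0)):
        """Far-field pattern of a 1D row of coding elements.

        Each element has phase phi(f) = phi0 + s*(f - f0) in degrees, where the
        sensitivity s is low for digit 0 and high for digit 1, so the same
        spatial coding pattern radiates differently as the frequency changes.
        """
        k = 2 * np.pi * freq_ghz * 1e9 / 3e8          # free-space wavenumber
        theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
        phases = np.deg2rad(phi0 + np.array([sens[b] for b in bits]) * (freq_ghz - f0))
        n = np.arange(len(bits))
        field = np.exp(1j * (k * d * np.outer(np.sin(theta), n) + phases)).sum(axis=1)
        return theta, np.abs(field)

    bits = [0, 1] * 8                                  # fixed spatial coding "0101..."
    for f in (8.0, 10.0):                              # same pattern, two frequencies
        theta, af = array_factor(bits, f)
        print(f"{f} GHz: peak at {np.degrees(theta[af.argmax()]):.1f} deg")
    ```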

  17. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The present lecture has as its main goal to show how transport lattice calculations are realised in a standard computer code. This is illustrated by the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, on the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  18. PACC information management code for common cause failures analysis

    International Nuclear Information System (INIS)

    Ortega Prieto, P.; Garcia Gay, J.; Mira McWilliams, J.

    1987-01-01

    The purpose of this paper is to present the PACC code, which, through an adequate data management, makes the task of computerized common-mode failure analysis easier. PACC processes and generates information in order to carry out the corresponding qualitative analysis, by means of the boolean technique of transformation of variables, and the quantitative analysis either using one of several parametric methods or a direct data-base. As far as the qualitative analysis is concerned, the code creates several functional forms for the transformation equations according to the user's choice. These equations are subsequently processed by boolean manipulation codes, such as SETS. The quantitative calculations of the code can be carried out in two different ways: either starting from a common cause data-base, or through parametric methods, such as the Binomial Failure Rate Method, the Basic Parameters Method or the Multiple Greek Letter Method, among others. (orig.)

  19. Development of a new EMP code at LANL

    Science.gov (United States)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes, Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~ 3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two-dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution to the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three-body attachment, two-body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes effects of photo-absorption, Compton scattering, and pair production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axes of the momentum and configuration spaces are assumed to be parallel, and we are currently also assuming complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed as well as the way forward towards an integrated modern EMP code.

  20. CodeArmor : Virtualizing the Code Space to Counter Disclosure Attacks

    NARCIS (Netherlands)

    Chen, Xi; Bos, Herbert; Giuffrida, Cristiano

    2017-01-01

    Code diversification is an effective strategy to prevent modern code-reuse exploits. Unfortunately, diversification techniques are inherently vulnerable to information disclosure. Recent diversification-aware ROP exploits have demonstrated that code disclosure attacks are a realistic threat, with an

  1. An Evaluation of Automated Code Generation with the PetriCode Approach

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Automated code generation is an important element of model driven development methodologies. We have previously proposed an approach for code generation based on Coloured Petri Net models annotated with textual pragmatics for the network protocol domain. In this paper, we present and evaluate three important properties of our approach: platform independence, code integratability, and code readability. The evaluation shows that our approach can generate code for a wide range of platforms which is integratable and readable....

  2. Development of Visual CINDER Code with Visual C#.NET

    International Nuclear Information System (INIS)

    Kim, Oyeon

    2016-01-01

    The CINDER code, CINDER'90 or CINDER2008, which is integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code provides decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies in steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The Visual CINDER code development is underway with the Visual C# .NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C# .NET framework are introduced. We also showed that the wrapper could make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionalities for radiation shielding and prevention studies

  3. Development of Visual CINDER Code with Visual C#.NET

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Oyeon [Institute for Modeling and Simulation Convergence, Daegu (Korea, Republic of)

    2016-10-15

    The CINDER code, CINDER'90 or CINDER2008, which is integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code provides decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies in steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The Visual CINDER code development is underway with the Visual C# .NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C# .NET framework are introduced. We also showed that the wrapper could make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionalities for radiation shielding and prevention studies.

  4. TRANSURANUS: A fuel rod analysis code ready for use

    Energy Technology Data Exchange (ETDEWEB)

    Lassmann, K; O'Carroll, C; Van de Laar, J [Commission of the European Communities, Karlsruhe (Germany). European Inst. for Transuranium Elements; Ott, C [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1994-12-31

    The basic concepts of fuel rod performance codes are discussed. The TRANSURANUS code developed at the Institute for Transuranium Elements, Karlsruhe (GE) is presented. It is a quasi two-dimensional (1½-D) code designed for treatment of a whole fuel rod for any type of reactor and any situation. The fuel rods found in the majority of test- or power reactors can be analyzed for very different situations (normal, off-normal and accidental). The time scale of the problems to be treated may range from milliseconds to years. The TRANSURANUS code consists of a clearly defined mechanical/mathematical framework into which physical models can easily be incorporated. This framework has been extensively tested and the programming very clearly reflects this structure. The code is well structured and easy to understand. It has a comprehensive material data bank for different fuels, claddings, coolants and their properties. The code can be employed in a deterministic and a statistical version. It is written in standard FORTRAN 77. The code system includes: 2 preprocessor programs (MAKROH and AXORDER) for setting up new data cases; the post-processor URPLOT for plotting all important quantities as a function of the radius, the axial coordinate or the time; the post-processor URSTART for evaluating statistical analyses. The TRANSURANUS code exhibits short running times. A new WINDOWS-based interactive interface is under development. The code is now in use in various European institutions and is available to all interested parties. 7 figs., 15 refs.

  5. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hatten, Mike [Solarc Energy Group, LLC, Seattle, WA (United States); Jones, Dennis [Group 14 Engineering, Inc., Denver, CO (United States); Cooper, Matthew [Group 14 Engineering, Inc., Denver, CO (United States)

    2017-03-24

    Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy-code-required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  6. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  7. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    Science.gov (United States)

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that only 54 patients (54%) had an entirely correct primary diagnosis assigned, and only 7 (7%) patients had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2) and the correct procedure code (odds ratio 310.0) when completed by the surgeon and passed directly to the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  8. Code Reuse and Modularity in Python

    Directory of Open Access Journals (Sweden)

    William J. Turkel

    2012-07-01

    Full Text Available Computer programs can become long, unwieldy and confusing without special mechanisms for managing complexity. This lesson will show you how to reuse parts of your code by writing Functions and breaking your programs into Modules, in order to keep everything concise and easier to debug. Being able to remove a single dysfunctional module can save time and effort.
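
    A minimal example in the spirit of the lesson: related functions grouped into one module so they can be reused and debugged in isolation (the file name and functions are invented for illustration):

    ```python
    # greetings.py -- grouping related functions in one module keeps programs
    # concise and easier to debug, as the lesson describes.

    def make_greeting(name):
        """Small, single-purpose functions are easy to test and to reuse."""
        return f"Hello, {name}!"

    def greet_all(names):
        """Reuses make_greeting instead of repeating the formatting logic."""
        return [make_greeting(n) for n in names]

    if __name__ == "__main__":
        # Another program could instead do:  from greetings import greet_all
        for line in greet_all(["Ada", "Grace"]):
            print(line)
    ```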

  9. Installation of Monte Carlo neutron and photon transport code system MCNP4

    International Nuclear Information System (INIS)

    Takano, Makoto; Sasaki, Mikio; Kaneko, Toshiyuki; Yamazaki, Takao.

    1993-03-01

    The continuous energy Monte Carlo code MCNP-4, including its graphic functions, has been installed on a Sun-4 SPARC-2 workstation with minor corrections. In order to validate the installed MCNP-4 code, 25 sample problems have been executed on the workstation and the results have been compared with the original ones. Most of the graphic functions have also been demonstrated by using 3 sample problems. Further, 14 additional nuclides have been added to the continuous cross section library edited from JENDL-3. (author)

  10. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    Science.gov (United States)

    Liebovitch, Larry

    1998-03-01

    The longest term correlations in living systems are the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated the complementary base is inserted in the new strand. Sometimes the wrong base is inserted; it sticks out, disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, that slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: How does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: What is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C. We did not find
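
    A small sketch of the kind of test described: code the bases as 0-3 and ask whether one position is a fixed Z4-linear function of other positions. A brute-force search over coefficients stands in for the paper's Gaussian elimination modified for modulus 4, and the sequences below are synthetic, not real genomic data:

    ```python
    from itertools import product

    BASE = {"A": 0, "T": 1, "G": 2, "C": 3}

    def find_linear_rule(segments, target, predictors):
        """Is base position `target` a Z4-linear function of `predictors`?

        Tries every coefficient vector (plus an additive constant) over Z4,
        a brute-force stand-in for Gaussian elimination modified for modulus 4.
        `segments` is a list of equal-length DNA strings; positions are 0-based.
        Returns the first rule that fits all segments, or None.
        """
        rows = [[BASE[ch] for ch in s] for s in segments]
        for coeffs in product(range(4), repeat=len(predictors) + 1):
            *c, c0 = coeffs
            if all((sum(ci * r[p] for ci, p in zip(c, predictors)) + c0) % 4
                   == r[target] for r in rows):
                return coeffs
        return None

    # Synthetic example (not real genomic data): position 3 was generated as
    # (2*pos0 + pos1 + 1) mod 4, so a consistent rule is found.
    segs = ["ATGG", "GGAC", "CATC", "TTCA"]
    print(find_linear_rule(segs, target=3, predictors=[0, 1]))
    ```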

  11. Engineering Ethics Education Having Reflected Various Values and a Global Code of Ethics

    Science.gov (United States)

    Kanemitsu, Hidekazu

    At present, a movement to establish a global code of ethics for science and engineering is underway. The author reviews the context of this movement and examines the possibility of engineering ethics education that uses a global code of ethics. In this paper, engineering ethics education that uses codes of ethics in general is considered, along with the expected function of a global code of ethics. Engineering ethics education in the new century should aim to share values among different countries and cultures. To use a global code of ethics as a tool for such education, the code should include various values, especially Asian values, to which engineering ethics has paid little attention.

  12. Intrabeam Scattering Studies for the ILC Damping Rings Using a New MATLAB Code

    Energy Technology Data Exchange (ETDEWEB)

    Reichel, I.; Wolski, A.

    2006-06-21

    A new code to calculate the effects of intrabeam scattering (IBS) has been developed in MATLAB based on the approximation suggested by K. Bane. It interfaces with the Accelerator Toolbox but can also read in lattice functions from other codes. The code has been benchmarked against results from other codes for the ATF that use this approximation or do the calculation in a different way. The new code has been used to calculate the emittance growth due to intrabeam scattering for the lattices currently proposed for the ILC Damping Rings, as IBS is a concern, especially for the electron ring. A description of the code and its user interface, as well as results for the Damping Rings, will be presented.

  13. Intrabeam Scattering Studies for the ILC Damping Rings Using a New MATLAB Code

    International Nuclear Information System (INIS)

    Reichel, I.; Wolski, A.

    2006-01-01

    A new code to calculate the effects of intrabeam scattering (IBS) has been developed in MATLAB based on the approximation suggested by K. Bane. It interfaces with the Accelerator Toolbox but can also read in lattice functions from other codes. The code has been benchmarked against results from other codes for the ATF that use this approximation or do the calculation in a different way. The new code has been used to calculate the emittance growth due to intrabeam scattering for the lattices currently proposed for the ILC Damping Rings, as IBS is a concern, especially for the electron ring. A description of the code and its user interface, as well as results for the Damping Rings, will be presented

  14. Distinguishing stimulus and response codes in theta oscillations in prefrontal areas during inhibitory control of automated responses.

    Science.gov (United States)

    Mückschel, Moritz; Dippel, Gabriel; Beste, Christian

    2017-11-01

    Response inhibition mechanisms are mediated via cortical and subcortical networks. At the cortical level, the superior frontal gyrus, including the supplementary motor area (SMA) and inferior frontal areas, is important. There is an ongoing debate about the functional roles of these structures during response inhibition as it is unclear whether these structures process different codes or contents of information during response inhibition. In the current study, we examined this question with a focus on theta frequency oscillations during response inhibition processes. We used a standard Go/Nogo task in a sample of human participants and combined different EEG signal decomposition methods with EEG beamforming approaches. The results suggest that stimulus coding during inhibitory control is attained by oscillations in the upper theta frequency band (∼7 Hz). In contrast, response selection codes during inhibitory control appear to be attained by the lower theta frequency band (∼4 Hz). Importantly, these different codes seem to be processed in distinct functional neuroanatomical structures. Although the SMA may process stimulus codes and response selection codes, the inferior frontal cortex may selectively process response selection codes during inhibitory control. Taken together, the results suggest that different entities within the functional neuroanatomical network associated with response inhibition mechanisms process different kinds of codes during inhibitory control. These codes seem to be reflected by different oscillations within the theta frequency band. Hum Brain Mapp 38:5681-5690, 2017. © 2017 Wiley-Liss, Inc. © 2017 Wiley Periodicals, Inc.

  15. Construction of new quantum MDS codes derived from constacyclic codes

    Science.gov (United States)

    Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran

    Obtaining quantum maximum distance separable (MDS) codes from dual-containing classical constacyclic codes using the Hermitian construction has paved a path to undertake the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes have been constructed here. One set of parameters obtained in this paper achieves a much larger distance than earlier work. The remaining constructed parameters of quantum MDS codes have large minimum distance and had not been explored yet.

  16. Code of ethics as a tool for resolving conflict in the organization

    Directory of Open Access Journals (Sweden)

    Prokopenko O.

    2016-02-01

    Full Text Available This article addresses the selection of tools for resolving conflicts in organizations, given the importance and topicality of the issue. One such tool is an effectively functioning organizational code of ethics. The more detailed this document is, the more effective it is. Nowadays, more and more organizations have their own codes of ethics. It would therefore be wrong to underestimate the code of ethics as a tool that can be used to resolve conflicts in organizations.

  17. Development of CAP code for nuclear power plant containment: Lumped model

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon, E-mail: sjhong90@fnctech.com [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Ha, Sang Jun [Central Research Institute, Korea Hydro & Nuclear Power Company, Ltd., 70, 1312-gil, Yuseong-daero, Yuseong-gu, Daejeon 305-343 (Korea, Republic of)

    2015-09-15

    Highlights: • A state-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function for linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed by the Korean nuclear society for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identifications. The assessments showed appropriate prediction capabilities of CAP.

  18. Development of CAP code for nuclear power plant containment: Lumped model

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul; Ha, Sang Jun

    2015-01-01

    Highlights: • A state-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function for linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed by the Korean nuclear society for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identifications. The assessments showed appropriate prediction capabilities of CAP

  19. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to the coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
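
    A toy sketch of the vector network coding operation described: an intermediate node multiplies incoming length-L packets by L x L matrices over GF(2) and sums them. Random matrices are used purely for illustration; the paper's algorithms select the matrices deterministically:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 4  # vector length: packets are length-L vectors over GF(2)

    def combine(incoming, n_out):
        """One intermediate node in vector network coding over GF(2).

        Each outgoing packet is the mod-2 sum of every incoming length-L
        vector multiplied by its own L x L coding matrix -- the matrices play
        the role that scalar coefficients play in scalar network coding.
        """
        outputs = []
        for _ in range(n_out):
            acc = np.zeros(L, dtype=int)
            for pkt in incoming:
                M = rng.integers(0, 2, size=(L, L))   # L x L coding matrix
                acc = (acc + M @ pkt) % 2
            outputs.append(acc)
        return outputs

    # Two source packets entering a node, three coded packets leaving it.
    p1 = rng.integers(0, 2, size=L)
    p2 = rng.integers(0, 2, size=L)
    for out in combine([p1, p2], n_out=3):
        print(out)
    ```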

  20. Code-Mixing and Code Switchingin The Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the specific forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as to determine the factors influencing the occurrence of those forms of code switching and code mixing. The research takes the form of a descriptive qualitative case study which took place in Al Mawaddah Boarding School, Ponorogo. Based on the analysis and discussion stated in the previous chapter, the forms of code mixing and code switching in learning activities at Al Mawaddah Boarding School involve the Javanese, Arabic, English and Indonesian languages, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, material sourced from the original language and its variations, and material sourced from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the speaking partner (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah Boarding School, with regard to the rules and characteristic variation of language in teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  1. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  2. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    Science.gov (United States)

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  3. Low Complexity List Decoding for Polar Codes with Multiple CRC Codes

    Directory of Open Access Journals (Sweden)

    Jong-Hwan Kim

    2017-04-01

    Full Text Available Polar codes are the first family of error correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes are now adopted as a channel code for the control channel of 5G new radio of the 3rd generation partnership project. However, the decoder implementation is one of the big practical problems and low complexity decoding has been studied. This paper addresses a low complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding, and reduce it by optimizing CRC positions in combination with a modified decoding operation. Resultingly, the proposed scheme obtains not only complexity reduction from early stopping of decoding, but also additional reduction from the reduced number of decoding paths.
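
    A small sketch of the CRC-aided path selection that successive cancellation list decoding relies on: each surviving path carries message bits plus CRC bits, and the path whose CRC checks is kept. The CRC-8 polynomial and bit layout are illustrative assumptions; the list decoder itself and the paper's multi-CRC placement are not modeled:

    ```python
    def crc8(bits, poly=0x07):
        """MSB-first CRC-8 over a list of 0/1 bits (illustrative polynomial)."""
        reg = 0
        for b in bits:
            reg ^= (b & 1) << 7
            reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
        return [(reg >> i) & 1 for i in range(7, -1, -1)]

    def pick_from_list(candidates):
        """Return the message of the first list path whose embedded CRC checks."""
        for cand in candidates:
            msg, tail = cand[:-8], cand[-8:]
            if crc8(msg) == tail:
                return msg
        return None   # decoding failure: no surviving path passes the CRC

    # Toy usage: the true codeword plus two corrupted list paths.
    true_msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
    tx = true_msg + crc8(true_msg)
    bad1 = tx.copy(); bad1[2] ^= 1
    bad2 = tx.copy(); bad2[7] ^= 1
    print(pick_from_list([bad1, bad2, tx]) == true_msg)   # True
    ```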

  4. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  5. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  6. TrueGrid: Code the table, tabulate the data

    NARCIS (Netherlands)

    F. Hermans (Felienne); T. van der Storm (Tijs)

    2016-01-01

    Spreadsheet systems are live programming environments. Both the data and the code are right in front of you, and if you edit either of them, the effects are immediately visible. Unfortunately, spreadsheets lack mechanisms for abstraction, such as classes, function definitions etc.

  7. Two codes used in analysis of rod ejection accident for Qinshan Nuclear Power Plant

    International Nuclear Information System (INIS)

    Zhu Xinguan

    1987-12-01

    Two codes were developed to analyse the rod ejection accident for the Qinshan Nuclear Power Plant. One was based on a point model with temperature reactivity feedback; in this code, the worth of the ejected rod was obtained under the 'adiabatic' approximation. In the other code, the Nodal Green's Function Method was used to solve the space-time dependent neutron diffusion equation. Using these codes, the transient core power has been calculated for two rod ejection cases at the beginning of core life in the Qinshan Nuclear Power Plant
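
    For readers unfamiliar with the point-model approach mentioned above, the sketch below integrates one-delayed-group point kinetics with a simple fuel-temperature reactivity feedback in Python; every constant and the feedback law are illustrative placeholders, not Qinshan data or the authors' models.

      # Illustrative point-kinetics model with one delayed-neutron group and a simple
      # Doppler-type temperature feedback; all constants are invented for demonstration.
      beta, Lam, lam = 0.0065, 2.0e-5, 0.08   # delayed fraction, generation time (s), precursor decay (1/s)
      alpha_T = -3.0e-5                       # fuel temperature coefficient of reactivity (1/K)
      k_T, C_heat = 0.05, 1.0                 # heat removal constant and heat capacity (arbitrary units)
      rho_rod = 0.004                         # step reactivity inserted by the ejected rod

      def simulate(t_end=1.0, dt=1.0e-5):
          n, c, T = 1.0, beta / (Lam * lam), 0.0   # power, precursor concentration (equilibrium), temperature rise
          for _ in range(int(t_end / dt)):
              rho = rho_rod + alpha_T * T          # net reactivity with feedback
              dn = ((rho - beta) / Lam) * n + lam * c
              dc = (beta / Lam) * n - lam * c
              dT = (n - k_T * T) / C_heat
              n, c, T = n + dt * dn, c + dt * dc, T + dt * dT
          return n, T

      print(simulate())   # relative power and temperature rise after one second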

  8. Symmetries in Genetic Systems and the Concept of Geno-Logical Coding

    Directory of Open Access Journals (Sweden)

    Sergey V. Petoukhov

    2016-12-01

    Full Text Available The genetic code of amino acid sequences in proteins does not allow understanding and modeling of inherited processes such as inborn coordinated motions of living bodies, innate principles of sensory information processing, quasi-holographic properties, etc. To be able to model these phenomena, the concept of geno-logical coding, which is connected with logical functions and Boolean algebra, is put forward. The article describes basic pieces of evidence in favor of the existence of the geno-logical code, which exists in parallel with the known genetic code of amino acid sequences but which serves for transferring inherited processes along chains of generations. These pieces of evidence have been received due to the analysis of symmetries in structures of molecular-genetic systems. The analysis has revealed a close connection of the genetic system with dyadic groups of binary numbers and with other mathematical objects related to dyadic groups: Walsh functions (which are algebraic characters of dyadic groups), bit-reversal permutations, logical holography, etc. These results provide a new approach for mathematical modeling of genetic structures, which uses known mathematical formalisms from technological fields of noise-immunity coding of information, binary analysis, logical holography, and digital devices of artificial intelligence. This opens opportunities for the development of an algebraic-logical biology.
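
    For concreteness, the short sketch below generates two of the mathematical objects named above in executable form: Walsh functions, as the rows of a Sylvester-Hadamard matrix, and the bit-reversal permutation of indices. The sizes are arbitrary and no biological claim is attached to this illustration.

      import numpy as np

      def hadamard(n):
          """Sylvester construction; the rows are Walsh functions in Hadamard order (n must be a power of two)."""
          H = np.array([[1]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      def bit_reversal(n):
          """Bit-reversal permutation of the indices 0..n-1 (n a power of two)."""
          bits = n.bit_length() - 1
          return [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]

      print(hadamard(8))
      print(bit_reversal(8))   # [0, 4, 2, 6, 1, 5, 3, 7]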

  9. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. Highlights include: two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.

  10. SWIMS: a small-angle multiple scattering computer code

    International Nuclear Information System (INIS)

    Sayer, R.O.

    1976-07-01

    SWIMS (Sigmund and WInterbon Multiple Scattering) is a computer code for calculation of the angular dispersion of ion beams that undergo small-angle, incoherent multiple scattering by gaseous or solid media. The code uses the tabulated angular distributions of Sigmund and Winterbon for a Thomas-Fermi screened Coulomb potential. The fraction of the incident beam scattered into a cone defined by the polar angle α is computed as a function of α for reduced thicknesses over the range 0.01 ≤ τ ≤ 10.0. 1 figure, 2 tables

  11. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  12. Mathematical Formulation used by MATLAB Code to Convert FTIR Interferograms to Calibrated Spectra

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Derek Elswick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-19

    This report discusses the mathematical procedures used to convert raw interferograms from Fourier transform infrared (FTIR) sensors to calibrated spectra. The work discussed in this report was completed as part of the Helios project at Los Alamos National Laboratory. MATLAB code was developed to convert the raw interferograms to calibrated spectra. The report summarizes the developed MATLAB scripts and functions, along with a description of the mathematical methods used by the code. The first step in working with raw interferograms is to convert them to uncalibrated spectra by applying an apodization function to the raw data and then by performing a Fourier transform. The developed MATLAB code also addresses phase error correction by applying the Mertz method. This report provides documentation for the MATLAB scripts.
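
    As a rough sketch of the first processing step described above (apodization followed by a Fourier transform), here in Python/NumPy rather than MATLAB: the Hann window and the synthetic interferogram are illustrative choices, and the Mertz phase correction and radiometric calibration performed by the report's code are omitted.

      import numpy as np

      def interferogram_to_spectrum(igram):
          """Apodize the raw interferogram and return the magnitude of its one-sided FFT (uncalibrated spectrum)."""
          igram = igram - igram.mean()            # remove the DC offset
          window = np.hanning(len(igram))         # apodization function (illustrative choice)
          return np.abs(np.fft.rfft(igram * window))

      # Synthetic two-line interferogram for demonstration
      x = np.arange(4096)
      igram = np.cos(2 * np.pi * 0.11 * x) + 0.5 * np.cos(2 * np.pi * 0.23 * x)
      spectrum = interferogram_to_spectrum(igram)
      print(spectrum.argmax())   # frequency bin of the strongest line (about 0.11 * 4096)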

  13. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  14. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  15. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  16. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  17. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information...

  18. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran77, Fortran 90 or C. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where the interference from the other users is possible.
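
    A minimal sketch of the message-passing pattern such a framework provides is shown below using mpi4py (an assumption made for illustration; the actual framework is a C++ module linked to Fortran/C codes). Each rank transports an independent share of "particle histories" (here a placeholder random walk score, not MARS15 or PHITS physics) and the tallies are combined with a reduction.

      from mpi4py import MPI   # assumes mpi4py is installed; run with e.g. `mpirun -n 4 python script.py`
      import random

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      random.seed(1234 + rank)                    # independent random stream per rank
      local_tally = 0.0
      for _ in range(100_000 // size):            # split the particle budget across ranks
          local_tally += random.expovariate(1.0)  # stand-in for the score of one particle history

      total = comm.reduce(local_tally, op=MPI.SUM, root=0)
      if rank == 0:
          print("combined tally:", total)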

  19. The origins and evolutionary history of human non-coding RNA regulatory networks.

    Science.gov (United States)

    Sherafatian, Masih; Mowla, Seyed Javad

    2017-04-01

    The evolutionary history and origin of the regulatory function of animal non-coding RNAs are not well understood. The lack of conservation of long non-coding RNAs and the small size of microRNAs have been major obstacles in their phylogenetic analysis. In this study, we tried to shed more light on the evolution of ncRNA regulatory networks by changing our phylogenetic strategy to focus on the evolutionary pattern of their protein coding targets. We used available target databases of miRNAs and lncRNAs to find their protein coding targets in human. We were able to recognize evolutionary hallmarks of ncRNA targets by phylostratigraphic analysis. We found the conventional 3'-UTR and lesser known 5'-UTR targets of miRNAs to be enriched at three consecutive phylostrata. First, in the eukaryota phylostratum corresponding to the emergence of miRNAs, our study revealed that miRNA targets function primarily in cell cycle processes. Moreover, the same overrepresentation of the targets observed in the next two consecutive phylostrata, opisthokonta and eumetazoa, corresponded to the expansion periods of miRNAs in animal evolution. Coding sequence targets of miRNAs showed a delayed rise at the opisthokonta phylostratum, compared to the 3' and 5' UTR targets of miRNAs. The lncRNA regulatory network was the latest to evolve, at the eumetazoa phylostratum.

  20. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  1. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  2. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  3. GYSELA, a full-f global gyrokinetic Semi-Lagrangian code for ITG turbulence simulations

    International Nuclear Information System (INIS)

    Grandgirard, V.; Sarazin, Y.; Garbet, X.; Dif-Pradalier, G.; Ghendrih, Ph.; Crouseilles, N.; Latu, G.; Sonnendruecker, E.; Besse, N.; Bertrand, P.

    2006-01-01

    This work addresses non-linear global gyrokinetic simulations of ion temperature gradient (ITG) driven turbulence with the GYSELA code. The particularity of the GYSELA code is to use a fixed grid with a Semi-Lagrangian (SL) scheme, and this for the entire distribution function. The 4D non-linear drift-kinetic version of the code has already shown the interest of such an SL method, which exhibits good energy conservation properties in the non-linear regime as well as an accurate description of fine spatial scales. The code has been upgraded to run 5D simulations of toroidal ITG turbulence. Linear benchmarks and first non-linear results prove that semi-Lagrangian codes can be a credible alternative for gyrokinetic simulations
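
    The backward semi-Lagrangian step at the heart of such schemes can be illustrated in one dimension: each grid point traces its characteristic back by v*dt and the distribution function is interpolated at that departure point. The periodic domain and linear interpolation below are simplifying assumptions made for this sketch; GYSELA itself is a 5D gyrokinetic code with a more accurate interpolation scheme.

      import numpy as np

      def semi_lagrangian_step(f, v, dx, dt):
          """Advance f(x) one step of the advection equation df/dt + v df/dx = 0 on a periodic grid."""
          n = f.size
          x = np.arange(n) * dx
          feet = (x - v * dt) % (n * dx)             # departure points of the characteristics
          idx = np.floor(feet / dx).astype(int)
          w = feet / dx - idx                        # linear interpolation weights
          return (1.0 - w) * f[idx % n] + w * f[(idx + 1) % n]

      f = np.exp(-((np.linspace(0, 1, 128, endpoint=False) - 0.3) ** 2) / 0.002)
      for _ in range(100):
          f = semi_lagrangian_step(f, v=0.4, dx=1.0 / 128, dt=0.005)
      print(f.argmax() / 128.0)   # the pulse centre has moved by roughly v * t = 0.2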

  4. Extending the capabilities of CFD codes to assess ash related problems

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Baxter, B. B.

    2004-01-01

    This paper discusses the application of FLUENT in the analysis of grate-fired biomass boilers. A short description of the concept used to model fuel conversion on the grate and the coupling to the CFD code is offered. The development and implementation of a CFD-based deposition model is presented in the remainder of the paper. The growth of deposits on furnace walls and super heater tubes is treated, including the impact on heat transfer rates determined by the CFD code. Based on the commercial CFD code FLUENT, the overall model is fully implemented through the User Defined Functions. The model is configured...

  5. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  6. Stochasticity in Ca2+ increase in spines enables robust and sensitive information coding.

    Directory of Open Access Journals (Sweden)

    Takuya Koumura

    Full Text Available A dendritic spine is a very small structure (∼0.1 µm³) of a neuron that processes input timing information. Why are spines so small? Here, we provide functional reasons; the size of spines is optimal for information coding. Spines code input timing information by the probability of Ca2+ increases, which makes robust and sensitive information coding possible. We created a stochastic simulation model of input timing-dependent Ca2+ increases in a cerebellar Purkinje cell's spine. Spines used probability coding of Ca2+ increases rather than amplitude coding for input timing detection via stochastic facilitation by utilizing the small number of molecules in a spine volume, where information per volume appeared optimal. Probability coding of Ca2+ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca2+ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.

  7. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Langenbuch, S.; Austregesilo, H.; Velkov, K. [GRS, Garching (Germany)] [and others

    1997-07-01

    The present situation of thermalhydraulics codes and 3D neutronics codes is briefly described and general considerations for coupling of these codes are discussed. Two different basic approaches of coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.

  8. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    International Nuclear Information System (INIS)

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-01-01

    The present situation of thermalhydraulics codes and 3D neutronics codes is briefly described and general considerations for coupling of these codes are discussed. Two different basic approaches of coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes

  9. Demonstration study on shielding safety analysis code (8)

    Energy Technology Data Exchange (ETDEWEB)

    Sawamura, Sadashi [Hokkaido Univ., Sapporo (Japan)

    2001-03-01

    Dose evaluation for direct radiation and skyshine from nuclear fuel facilities is one of the environmental evaluation items. This evaluation is carried out using shielding calculation codes. Because there are extremely few benchmark data for skyshine, the calculations have to be performed very conservatively. Therefore, benchmark data for skyshine and a well-investigated skyshine code are necessary to carry out a rational evaluation of nuclear facilities. The purpose of this study is to obtain benchmark data for skyshine and to investigate the calculation code for skyshine. In this fiscal year, the following items were investigated. (1) A {sup 3}He detector and some instruments were added to the former detection system to increase the detection sensitivity in pulsed neutron measurements. Using the new detection system, the skyshine of neutrons from the 45 MeV LINAC facility was measured at distances up to 350 m. (2) To estimate the spectrum of neutrons leaking from the facility, a {sup 3}He detector with moderators was constructed and the response functions of the detector were calculated using the MCNP simulation code. The leakage spectrum in the facility was measured and unfolded using the SAND-II code. (3) Using the EGS code and/or MCNP code, neutron yields from the photo-nuclear reaction in the lead target were calculated. Then, the neutron fluence at some points, including the duct (from which neutrons leak and which is considered to be a skyshine source), was simulated with the MCNP Monte Carlo code. (4) At distances up to 350 m from the facility, the neutron fluence due to the skyshine process was calculated and compared with the experimental results. The comparison gives fairly good agreement. (author)

  10. Assessment of the SPACE Code Using the ATLAS SLB-GB-01 Test

    International Nuclear Information System (INIS)

    Kim, Yo Han; Yang, Chang Keun; Kim, Seyun

    2013-01-01

    The Korea Hydro and Nuclear Power Co. (KHNP) has developed a safety analysis code, called the Safety and Performance Analysis Code for Nuclear Power Plants (SPACE), through collaborative work with other Korean nuclear industries. SPACE is a general-purpose best-estimate two-phase three-field thermal-hydraulic analysis code to analyze the safety and performance of pressurized water reactors (PWRs). The SPACE code has sufficient functions and capabilities to replace outdated vendor-supplied codes and to be used for the safety analysis of operating PWRs and the design of advanced reactors. As a result of the second phase of the SPACE code development project, version 2.14 of the code was released after successive V and V works using integral loop test data or plant operating data. In this study, the ATLAS main steam-line break (MSLB) test, SLB-GB-01, was simulated as a V and V work using the SPACE code, and the results were compared with the measured data. Through the simulation, it was concluded that the SPACE code can effectively simulate MSLB accidents

  11. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  12. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  13. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  14. Performance Analysis of CRC Codes for Systematic and Nonsystematic Polar Codes with List Decoding

    Directory of Open Access Journals (Sweden)

    Takumi Murata

    2018-01-01

    Full Text Available Successive cancellation list (SCL decoding of polar codes is an effective approach that can significantly outperform the original successive cancellation (SC decoding, provided that proper cyclic redundancy-check (CRC codes are employed at the stage of candidate selection. Previous studies on CRC-assisted polar codes mostly focus on improvement of the decoding algorithms as well as their implementation, and little attention has been paid to the CRC code structure itself. For the CRC-concatenated polar codes with CRC code as their outer code, the use of longer CRC code leads to reduction of information rate, whereas the use of shorter CRC code may reduce the error detection probability, thus degrading the frame error rate (FER performance. Therefore, CRC codes of proper length should be employed in order to optimize the FER performance for a given signal-to-noise ratio (SNR per information bit. In this paper, we investigate the effect of CRC codes on the FER performance of polar codes with list decoding in terms of the CRC code length as well as its generator polynomials. Both the original nonsystematic and systematic polar codes are considered, and we also demonstrate that different behaviors of CRC codes should be observed depending on whether the inner polar code is systematic or not.

  15. Nuclear data libraries for Tripoli-3.5 code; Bibliotheques de donnees nucleaires pour le code tripoli-3.5

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th

    2001-07-01

    The TRIPOLI-3 code uses multigroup nuclear data libraries generated using the NJOY-THEMIS suite of modules: for neutrons, they are produced from the ENDF/B-VI evaluations and cover the range between 20 MeV and 10{sup -5} eV, either in 315 groups and for one temperature, or in 3209 groups and for five temperatures; for gamma-rays, they are from JEF2 and are processed in groups between 14 MeV and keV. The probability tables used for the neutron transport calculations have been derived from the ENDF/B-VI evaluations using the CALENDF code. Cross sections for gamma production by neutron interaction (fission, capture or inelastic scattering) have been derived from ENDF/B-VI in 315 neutron groups and 75 gamma groups. The code also uses two response function libraries: the neutron library is based on several sources, in particular the dosimetry libraries IRDF/85 and IRDF/90; the gamma-ray library is based on the JEF2 evaluation and contains the kerma factors for all the elements and cross sections for all interactions. (author)

  16. Z₂-double cyclic codes

    OpenAIRE

    Borges, J.

    2014-01-01

    A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any cyclic shift of the coordinates of both subsets leaves invariant the code. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and its duals, and the relation...

  17. Iterative Decoding of Concatenated Codes: A Tutorial

    Directory of Open Access Journals (Sweden)

    Phillip A. Regalia

    2005-05-01

    Full Text Available The turbo decoding algorithm of a decade ago constituted a milestone in error-correction coding for digital communications, and has inspired extensions to generalized receiver topologies, including turbo equalization, turbo synchronization, and turbo CDMA, among others. Despite an accrued understanding of iterative decoding over the years, the “turbo principle” remains elusive to master analytically, thereby inciting interest from researchers outside the communications domain. In this spirit, we develop a tutorial presentation of iterative decoding for parallel and serial concatenated codes, in terms hopefully accessible to a broader audience. We motivate iterative decoding as a computationally tractable attempt to approach maximum-likelihood decoding, and characterize fixed points in terms of a “consensus” property between constituent decoders. We review how the decoding algorithm for both parallel and serial concatenated codes coincides with an alternating projection algorithm, which allows one to identify conditions under which the algorithm indeed converges to a maximum-likelihood solution, in terms of particular likelihood functions factoring into the product of their marginals. The presentation emphasizes a common framework applicable to both parallel and serial concatenated codes.

  18. Functional anthology of intrinsic disorder. 2. Cellular components, domains, technical terms, developmental processes, and coding sequence diversities correlated with long disordered regions.

    Science.gov (United States)

    Vucetic, Slobodan; Xie, Hongbo; Iakoucheva, Lilia M; Oldfield, Christopher J; Dunker, A Keith; Obradovic, Zoran; Uversky, Vladimir N

    2007-05-01

    Biologically active proteins without stable ordered structure (i.e., intrinsically disordered proteins) are attracting increased attention. Functional repertoires of ordered and disordered proteins are very different, and the ability to differentiate whether a given function is associated with intrinsic disorder or with a well-folded protein is crucial for modern protein science. However, there is a large gap between the number of proteins experimentally confirmed to be disordered and their actual number in nature. As a result, studies of functional properties of confirmed disordered proteins, while helpful in revealing the functional diversity of protein disorder, provide only a limited view. To overcome this problem, a bioinformatics approach for comprehensive study of functional roles of protein disorder was proposed in the first paper of this series (Xie, H.; Vucetic, S.; Iakoucheva, L. M.; Oldfield, C. J.; Dunker, A. K.; Obradovic, Z.; Uversky, V. N. Functional anthology of intrinsic disorder. 1. Biological processes and functions of proteins with long disordered regions. J. Proteome Res. 2007, 5, 1882-1898). Applying this novel approach to Swiss-Prot sequences and functional keywords, we found over 238 and 302 keywords to be strongly positively or negatively correlated, respectively, with long intrinsically disordered regions. This paper describes approximately 90 Swiss-Prot keywords attributed to the cellular components, domains, technical terms, developmental processes, and coding sequence diversities possessing strong positive and negative correlation with long disordered regions.

  19. Functional Anthology of Intrinsic Disorder. II. Cellular Components, Domains, Technical Terms, Developmental Processes and Coding Sequence Diversities Correlated with Long Disordered Regions

    Science.gov (United States)

    Vucetic, Slobodan; Xie, Hongbo; Iakoucheva, Lilia M.; Oldfield, Christopher J.; Dunker, A. Keith; Obradovic, Zoran; Uversky, Vladimir N.

    2008-01-01

    Biologically active proteins without stable ordered structure (i.e., intrinsically disordered proteins) are attracting increased attention. Functional repertoires of ordered and disordered proteins are very different, and the ability to differentiate whether a given function is associated with intrinsic disorder or with a well-folded protein is crucial for modern protein science. However, there is a large gap between the number of proteins experimentally confirmed to be disordered and their actual number in nature. As a result, studies of functional properties of confirmed disordered proteins, while helpful in revealing the functional diversity of protein disorder, provide only a limited view. To overcome this problem, a bioinformatics approach for comprehensive study of functional roles of protein disorder was proposed in the first paper of this series (Xie H., Vucetic S., Iakoucheva L.M., Oldfield C.J., Dunker A.K., Obradovic Z., Uversky V.N. (2006) Functional anthology of intrinsic disorder. I. Biological processes and functions of proteins with long disordered regions. J. Proteome Res.). Applying this novel approach to Swiss-Prot sequences and functional keywords, we found over 238 and 302 keywords to be strongly positively or negatively correlated, respectively, with long intrinsically disordered regions. This paper describes ~90 Swiss-Prot keywords attributed to the cellular components, domains, technical terms, developmental processes and coding sequence diversities possessing strong positive and negative correlation with long disordered regions. PMID:17391015

  20. A Novel Error Resilient Scheme for Wavelet-based Image Coding Over Packet Networks

    OpenAIRE

    WenZhu Sun; HongYu Wang; DaXing Qian

    2012-01-01

    This paper presents a robust transmission strategy for wavelet-based scalable bit streams over packet erasure channels. By taking advantage of bit plane coding and multiple description coding, the proposed strategy adopts layered multiple description coding (LMDC) for embedded wavelet coders to improve the error resilience of the important bit planes in the sense of the D(R) function. Then, the post-compression rate-distortion (PCRD) optimization process is used to impro...

  1. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  2. Quantum Codes From Negacyclic Codes over Group Ring (Fq + υFq)G

    International Nuclear Information System (INIS)

    Koroglu, Mehmet E.; Siap, Irfan

    2016-01-01

    In this paper, we determine self-dual and self-orthogonal codes arising from negacyclic codes over the group ring (Fq + υFq)G. By taking a suitable Gray image of these codes we obtain many quantum error-correcting codes with good parameters over Fq. (paper)

  3. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. It covers convolutional, turbo, and low-density parity-check (LDPC) codes and polar codes in a unified framework, advanced research-related developments such as spatial coupling, and a focus on algorithmic and implementation aspects of error control coding.

  4. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
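
    A toy version of the idea is sketched below, using the (7,4) Hamming code purely for illustration (an assumption not taken from the abstract): a sparse 7-bit source block is treated as an error pattern and compressed to its 3-bit syndrome, and any block of weight at most one is recovered exactly.

      import numpy as np

      # Columns of H are the binary representations of 1..7 (row 0 is the least significant bit).
      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      def compress(block):
          """Keep only the 3-bit syndrome of the 7-bit source block."""
          return H.dot(block) % 2

      def decompress(syndrome):
          """Recover the unique block of weight <= 1 with this syndrome; denser blocks incur distortion."""
          block = np.zeros(7, dtype=int)
          position = int("".join(str(b) for b in syndrome[::-1]), 2)   # read the syndrome as a column index
          if position:
              block[position - 1] = 1
          return block

      source = np.array([0, 0, 0, 0, 1, 0, 0])   # low-entropy source block (a single 1)
      assert np.array_equal(decompress(compress(source)), source)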

  5. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    International Nuclear Information System (INIS)

    Holzrichter, J.F.; Ng, L.C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs

  6. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    Science.gov (United States)

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  7. A Cerebellar Framework for Predictive Coding and Homeostatic Regulation in Depressive Disorder.

    Science.gov (United States)

    Schutter, Dennis J L G

    2016-02-01

    Depressive disorder is associated with abnormalities in the processing of reward and punishment signals and disturbances in homeostatic regulation. These abnormalities are proposed to impair error minimization routines for reducing uncertainty. Several lines of research point towards a role of the cerebellum in reward- and punishment-related predictive coding and homeostatic regulatory function in depressive disorder. Available functional and anatomical evidence suggests that in addition to the cortico-limbic networks, the cerebellum is part of the dysfunctional brain circuit in depressive disorder as well. It is proposed that impaired cerebellar function contributes to abnormalities in predictive coding and homeostatic dysregulation in depressive disorder. Further research on the role of the cerebellum in depressive disorder may extend our knowledge of the functional and neural mechanisms of depressive disorder and the development of novel antidepressant treatment strategies targeting the cerebellum.

  8. Recent Improvements to the IMPACT-T Parallel Particle Tracking Code

    International Nuclear Information System (INIS)

    Qiang, J.; Pogorelov, I.V.; Ryne, R.

    2006-01-01

    The IMPACT-T code is a parallel three-dimensional quasi-static beam dynamics code for modeling high brightness beams in photoinjectors and RF linacs. Developed under the US DOE Scientific Discovery through Advanced Computing (SciDAC) program, it includes several key features including a self-consistent calculation of 3D space-charge forces using a shifted and integrated Green function method, multiple energy bins for beams with large energy spread, and models for treating RF standing wave and traveling wave structures. In this paper, we report on recent improvements to the IMPACT-T code including modeling traveling wave structures, short-range transverse and longitudinal wakefields, and longitudinal coherent synchrotron radiation through bending magnets

  9. Do you write secure code?

    CERN Multimedia

    Computer Security Team

    2011-01-01

    At CERN, we are excellent at producing software, such as complex analysis jobs, sophisticated control programs, extensive monitoring tools, interactive web applications, etc. This software is usually highly functional, and fulfils the needs and requirements as defined by its author. However, due to time constraints or unintentional ignorance, security aspects are often neglected. Subsequently, it was even more embarrassing for the author to find out that his code was flawed and was used to break into CERN computers, web pages or to steal data…   Thus, if you have the pleasure or task of producing software applications, take some time beforehand and familiarize yourself with good programming practices. They should not only prevent basic security flaws in your code, but also improve its readability, maintainability and efficiency. Basic rules for good programming, as well as essential books on proper software development, can be found in the section for software developers on our security we...

  10. Constellation labeling optimization for bit-interleaved coded APSK

    Science.gov (United States)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with the robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A Binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme which yields better performance against conventional 32-APSK constellation defined in DVB-S2 standard.
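
    A simplified sketch of what such an optimization looks like is given below: the cost weights the Hamming distance between two symbols' labels by a decreasing function of their Euclidean distance (a crude stand-in for the distance- and mutual-information-based cost functions mentioned above), and a greedy pairwise label swap, a simplified variant of the binary switching algorithm, reduces it. An 8-PSK constellation is used instead of 32-APSK, and all choices here are illustrative.

      import numpy as np
      from itertools import combinations

      M = 8
      points = np.exp(2j * np.pi * np.arange(M) / M)   # unit-circle 8-PSK constellation

      def cost(labels):
          """Sum of label Hamming distances weighted by a decreasing function of symbol distance."""
          return sum(bin(labels[i] ^ labels[j]).count("1") * np.exp(-abs(points[i] - points[j]) ** 2)
                     for i, j in combinations(range(M), 2))

      labels = list(range(M))          # start from the natural (binary counting) labeling
      best = cost(labels)
      improved = True
      while improved:
          improved = False
          for i, j in combinations(range(M), 2):
              labels[i], labels[j] = labels[j], labels[i]      # tentatively swap two labels
              c = cost(labels)
              if c < best - 1e-12:
                  best, improved = c, True                     # keep the improving swap
              else:
                  labels[i], labels[j] = labels[j], labels[i]  # undo it
      print(best, labels)   # the cost drops below that of the natural labeling; a local optimum is possible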

  11. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    Science.gov (United States)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop
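
    The reverse-mode (adjoint) property exploited by such tools can be illustrated on a toy objective: a hand-written adjoint sweep returns the gradient with respect to every design variable for roughly the cost of one extra function evaluation. The function below is a stand-in invented for this sketch, not output of ADJIFOR or the CFL3D adjoint.

      import numpy as np

      def f(x):
          """Toy objective f(x) = sum(sin(x)**2); returns the value and the intermediate needed by the adjoint sweep."""
          s = np.sin(x)
          return (s ** 2).sum(), s

      def f_adjoint(x, s):
          """Reverse sweep: back-propagate d(sum)/dy = 1 through y = s**2 and s = sin(x)."""
          y_bar = np.ones_like(x)
          s_bar = 2.0 * s * y_bar
          return np.cos(x) * s_bar          # gradient with respect to every component of x

      x = np.linspace(0.0, 1.0, 1000)       # "hundreds to thousands of independent variables"
      value, s = f(x)
      grad = f_adjoint(x, s)

      # Spot-check one component against a finite difference
      i, h = 123, 1e-6
      e = np.zeros_like(x); e[i] = h
      assert abs((f(x + e)[0] - value) / h - grad[i]) < 1e-4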

  12. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem of DATA–ACK interference still remains. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a “pipeline” random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high loss and mobile scenarios, while introducing minimal overhead in normal operation.
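
    The inter-flow part of the scheme can be illustrated in a few lines of Python: a relay that holds a DATA packet for one direction and an ACK for the other broadcasts their XOR in a single transmission, and each endpoint recovers the packet meant for it by XOR-ing with the packet it already knows. The packet contents are arbitrary examples, and a real protocol would also carry the original packet lengths.

      def xor_bytes(a, b):
          """XOR two byte strings, zero-padding the shorter one."""
          length = max(len(a), len(b))
          a, b = a.ljust(length, b"\0"), b.ljust(length, b"\0")
          return bytes(x ^ y for x, y in zip(a, b))

      data_pkt = b"DATA: payload bytes"
      ack_pkt = b"ACK: seq=42"

      coded = xor_bytes(data_pkt, ack_pkt)       # one broadcast replaces two unicast transmissions
      recovered = xor_bytes(coded, ack_pkt)      # the DATA receiver already knows the ACK it sent
      assert recovered == data_pkt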

  13. Performance analysis of WS-EWC coded optical CDMA networks with/without LDPC codes

    Science.gov (United States)

    Huang, Chun-Ming; Huang, Jen-Fa; Yang, Chao-Chin

    2010-10-01

    One extended Welch-Costas (EWC) code family for the wavelength-division-multiplexing/spectral-amplitude coding (WDM/SAC; WS) optical code-division multiple-access (OCDMA) networks is proposed. This system has a superior performance as compared to the previous modified quadratic congruence (MQC) coded OCDMA networks. However, since the performance of such a network is unsatisfactory when the data bit rate is higher, one class of quasi-cyclic low-density parity-check (QC-LDPC) code is adopted to improve that. Simulation results show that the performance of the high-speed WS-EWC coded OCDMA network can be greatly improved by using the LDPC codes.

  14. A multiplex coding imaging spectrometer for X-ray astronomy

    International Nuclear Information System (INIS)

    Rocchia, R.; Deschamps, J.Y.; Koch-Miramond, L.; Tarrius, A.

    1985-06-01

    The paper describes a multiplex coding system associated with a solid state spectrometer Si(Li) designed to be placed at the focus of a grazing incidence telescope. In this instrument the spectrometric and imaging functions are separated. The coding system consists in a movable mask with pseudo randomly distributed holes, located in the focal plane of the telescope. The pixel size lies in the range 100-200 microns. The close association of the coding system with a Si(Li) detector gives an imaging spectrometer combining the good efficiency (50% between 0,5 and 10 keV) and energy resolution (ΔE approximately 90 to 160 eV) of solid state spectrometers with the spatial resolution of the mask. Simulations and results obtained with a laboratory model are presented

  15. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
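
    The girth-4 condition mentioned above has a simple matrix characterization that can be checked directly: a length-4 cycle exists in the Tanner graph exactly when two columns of the parity-check matrix share a 1 in two or more rows. The snippet below is a generic check on an arbitrary example matrix, not the paper's joint design procedure.

      import numpy as np

      def has_girth4_cycle(H):
          """True if some pair of columns of H overlaps in at least two rows (i.e., a 4-cycle exists)."""
          overlaps = H.T @ H                 # entry (i, j) counts the rows where columns i and j both have a 1
          np.fill_diagonal(overlaps, 0)
          return bool((overlaps >= 2).any())

      H = np.array([[1, 1, 0, 1],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1]])
      print(has_girth4_cycle(H))   # True: columns 0 and 1 overlap in rows 0 and 1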

  16. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frames independently, performing relatively simple operations. Therefore..., with DVC the complexity is shifted from the encoder to the decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application... of the to-be-decoded frame. Another key element is the residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between the SI and the original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...
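
    As background for the correlation-noise modelling mentioned above, the sketch below shows the Laplacian model commonly used in the DVC literature (a generic illustration, not necessarily the estimator developed in the thesis; the residual data are synthetic): the energy of an estimated residual frame is turned into a per-block Laplacian parameter that quantifies how reliable the side information (SI) is.

      # Generic Laplacian correlation-noise sketch for DVC (synthetic data).
      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic residual frame standing in for the estimated SI error
      # (e.g. the difference of the two motion-compensated references).
      residual = rng.laplace(0.0, 3.0, size=(64, 64))

      def block_alphas(residual, bs=16):
          """Per-block Laplacian parameter: alpha = sqrt(2 / E[r^2])."""
          h, w = residual.shape
          alphas = np.empty((h // bs, w // bs))
          for bi in range(h // bs):
              for bj in range(w // bs):
                  block = residual[bi * bs:(bi + 1) * bs, bj * bs:(bj + 1) * bs]
                  alphas[bi, bj] = np.sqrt(2.0 / np.mean(block ** 2))
          return alphas

      def pixel_likelihood(x, y, alpha):
          """Laplacian correlation model: p(original pixel = x | SI pixel = y)."""
          return 0.5 * alpha * np.exp(-alpha * np.abs(x - y))

      alphas = block_alphas(residual)
      print("alpha per block (small = unreliable SI):")
      print(np.round(alphas, 3))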

  17. Geometrical modification transfer between specific meshes of each coupled physical codes. Application to the Jules Horowitz research reactor experimental devices

    International Nuclear Information System (INIS)

    Duplex, B.

    2011-01-01

    The CEA develops and uses scientific software, called physical codes, in various physical disciplines to optimize installation and experimentation costs. During a study, several physical phenomena interact, so code coupling and data exchange between different physical codes are required. Each physical code computes on a particular geometry, usually represented by a mesh composed of thousands to millions of elements. This PhD thesis focuses on the transfer of geometrical modifications between the specific meshes of the coupled physical codes. First, it presents a physical code coupling method where deformations are computed by one of these codes. Next, it discusses the establishment of a model, common to the different physical codes, grouping all the shared data. Finally, it covers the transfer of deformations between meshes of the same geometry or of adjacent geometries. Geometrical modifications are discrete data because they are based on a mesh. In order to allow every code to access the deformations and to transfer them, a continuous representation is computed. Two functions are developed, one with global support and the other with local support. Both combine a simplification method and a radial basis function network. A complete use case is dedicated to the Jules Horowitz reactor, in which the effect of differential thermal expansion on the cooling of experimental devices is studied. (author) [fr
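
    The idea of turning discrete nodal deformations into a continuous field with a radial basis function interpolant can be sketched as follows (a generic, globally supported Gaussian RBF toy example, not the CEA implementation; meshes and displacements are synthetic): displacements known at the nodes of one mesh are fitted once and can then be evaluated at the nodes of any other mesh of the same geometry.

      # Generic RBF interpolation of mesh deformations (synthetic example).
      import numpy as np

      def gaussian(r, eps=0.5):
          """Globally supported Gaussian radial basis function."""
          return np.exp(-(r / eps) ** 2)

      def rbf_fit(centers, values):
          """Solve for the RBF weights that reproduce the nodal displacements."""
          r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
          return np.linalg.solve(gaussian(r), values)

      def rbf_eval(points, centers, weights):
          """Evaluate the continuous deformation field at arbitrary points."""
          r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
          return gaussian(r) @ weights

      # Hypothetical coarse mechanical mesh: node coordinates and displacements.
      mech_nodes = np.random.default_rng(2).random((50, 3))
      mech_disp = 0.01 * mech_nodes               # e.g. a uniform dilatation

      weights = rbf_fit(mech_nodes, mech_disp)

      # Evaluate the field on the (finer) mesh of another physical code.
      other_nodes = np.random.default_rng(3).random((200, 3))
      other_disp = rbf_eval(other_nodes, mech_nodes, weights)
      print(other_disp.shape)                     # (200, 3) interpolated displacements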

  18. Two "dual" families of Nearly-Linear Codes over ℤp, p odd

    NARCIS (Netherlands)

    Asch, van A.G.; Tilborg, van H.C.A.

    2001-01-01

    Since the paper by Hammons et al. [1], various authors have shown an enormous interest in linear codes over the ring Z4. A special weight function on Z4 was introduced, and by means of the so-called Gray map φ : Z4 → Z2^2 a relation was established between linear codes over Z4 and certain interesting
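
    For reference, the Gray map mentioned above is the standard map 0→00, 1→01, 2→11, 3→10, which is an isometry from (Z4, Lee weight) to (Z2^2, Hamming weight); the small sketch below simply checks this property on an example word.

      # The standard Gray map from Z4 to Z2^2 and the Lee weight it preserves.
      GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
      LEE_WEIGHT = {0: 0, 1: 1, 2: 2, 3: 1}

      def gray_image(word):
          """Map a word over Z4 to a binary word of twice the length."""
          return tuple(bit for symbol in word for bit in GRAY[symbol])

      word = (1, 2, 0, 3)                      # a word over Z4
      image = gray_image(word)                 # its binary Gray image
      assert sum(LEE_WEIGHT[s] for s in word) == sum(image)  # Lee weight = Hamming weight
      print(word, "->", image)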

  19. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  20. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...
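
    As a concrete, textbook-level illustration of a small color code (not the constructions introduced in the paper), the sketch below writes the 7-qubit Steane code, the smallest 2-D color code, as a CSS stabilizer code, checks that its generators commute, and verifies that a transversal (qubit-wise) Hadamard maps the generator set onto itself because the X-type and Z-type supports coincide.

      # The Steane code as a CSS stabilizer code, with a transversal-Hadamard check.
      import numpy as np

      # Parity-check matrix of the [7,4] Hamming code; rows give generator supports.
      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

      # CSS stabilizers in symplectic (X-part | Z-part) form.
      zeros = np.zeros_like(H)
      stabilizers = np.vstack([np.hstack([H, zeros]),    # X-type generators
                               np.hstack([zeros, H])])   # Z-type generators

      def commute(p, q, n=7):
          """Two Pauli operators commute iff their symplectic product is 0 mod 2."""
          return (p[:n] @ q[n:] + p[n:] @ q[:n]) % 2 == 0

      assert all(commute(p, q) for p in stabilizers for q in stabilizers)

      # Transversal Hadamard swaps the X and Z parts of every generator; because the
      # X- and Z-type supports coincide, the generator set is mapped onto itself.
      hadamard_image = np.hstack([stabilizers[:, 7:], stabilizers[:, :7]])
      assert {tuple(r) for r in hadamard_image} == {tuple(r) for r in stabilizers}
      print("Steane-code stabilizers commute and are preserved by transversal H.")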