WorldWideScience

Sample records for sampling mcmis-ds code

  1. Mammographic Imaging Studies Using the Monte Carlo Image Simulation-Differential Sampling (MCMIS-DS) Code

    Energy Technology Data Exchange (ETDEWEB)

    Kuruvilla Verghese

    2002-04-05

    This report summarizes the highlights of the research performed under the 1-year NEER grant from the Department of Energy. The primary goal of this study was to investigate the effects of certain design changes in the Fisher Senoscan mammography system and in the degree of breast compression on the discernibility of microcalcifications in calcification clusters often observed in mammograms with tumor lesions. The most important design change that one can contemplate in a digital mammography system to improve resolution of calcifications is the reduction of pixel dimensions of the digital detector. Breast compression is painful to the patient and is thought to be a deterrent to women seeking routine mammographic screening. Calcification clusters often serve as markers (indicators) of breast cancer.

  2. Image classification by semisupervised sparse coding with confident unlabeled samples

    Science.gov (United States)

    Li, Xiao; Fang, Min; Wu, Jinqiao; He, Liang; Tian, Xian

    2017-09-01

    Sparse coding has achieved excellent performance in image classification tasks, especially when supervision information is incorporated into the dictionary learning process. However, there are typically large numbers of unlabeled samples that are expensive and tedious to annotate. We propose an image classification algorithm based on semisupervised sparse coding with confident unlabeled samples. In order to make the learned sparse codes more discriminative, we select and annotate some confident unlabeled samples. A minimization model is developed in which the reconstruction errors of the labeled, the selected unlabeled and the remaining unlabeled data and the classification error are integrated, which enhances the discriminative property of the dictionary and sparse representations. The experimental results on image classification tasks demonstrate that our algorithm can significantly improve image classification performance.
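
    As a rough illustration of the underlying machinery, the sketch below encodes a sample as a sparse linear combination of dictionary atoms using ISTA (iterative soft-thresholding); the dictionary, data and regularization weight are arbitrary assumptions, and the semisupervised selection of confident unlabeled samples described in the abstract is not implemented here.

```python
# Minimal sketch: encode a sample as a sparse combination of dictionary atoms
# via ISTA (iterative soft-thresholding). Illustrative only; D, x and lam are
# assumed values, and this is not the semisupervised algorithm of the paper.
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = rng.standard_normal(64)
code = ista_sparse_code(D, x)
print("non-zeros in code:", np.count_nonzero(code))
```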

  3. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    Science.gov (United States)

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based clustering method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
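
    As a toy illustration of the meta-sample idea, the sketch below extracts a few meta-samples per class with an SVD of each class's training matrix, codes a test sample against all meta-samples with an l2-regularized least-squares coder, and assigns the class with the smallest reconstruction residual. MRRCC itself uses regularized robust coding rather than this simple ridge coder, and all data and parameters here are invented.

```python
# Toy sketch of meta-sample-based classification (assumed data and parameters):
# meta-samples = leading left singular vectors of each class's training matrix;
# a test sample is coded against all meta-samples with an l2-regularized
# least-squares coder and assigned to the class with the smallest residual.
import numpy as np

def meta_samples(X_class, k=3):
    U, _, _ = np.linalg.svd(X_class, full_matrices=False)
    return U[:, :k]                                 # genes x k meta-samples

def classify(y, metas_per_class, lam=0.1):
    M = np.hstack(metas_per_class)                  # concatenated dictionary
    a = np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ y)
    residuals, start = [], 0
    for Mc in metas_per_class:
        k = Mc.shape[1]
        residuals.append(np.linalg.norm(y - Mc @ a[start:start + k]))
        start += k
    return int(np.argmin(residuals))

rng = np.random.default_rng(1)
basis0 = rng.standard_normal((100, 5))              # class-0 expression programs
basis1 = rng.standard_normal((100, 5))              # class-1 expression programs
class0 = basis0 @ rng.standard_normal((5, 20)) + 0.1 * rng.standard_normal((100, 20))
class1 = basis1 @ rng.standard_normal((5, 20)) + 0.1 * rng.standard_normal((100, 20))
metas = [meta_samples(class0), meta_samples(class1)]
test = basis0 @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
print(classify(test, metas))                        # expected: 0
```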

  4. HLA-E regulatory and coding region variability and haplotypes in a Brazilian population sample.

    Science.gov (United States)

    Ramalho, Jaqueline; Veiga-Castelli, Luciana C; Donadi, Eduardo A; Mendes-Junior, Celso T; Castelli, Erick C

    2017-11-01

    The HLA-E gene is characterized by low but broad expression across different tissues. HLA-E is considered a conserved gene, being one of the least polymorphic class I HLA genes. The HLA-E molecule interacts with Natural Killer cell receptors and T lymphocyte receptors, and might activate or inhibit immune responses depending on the peptide presented by HLA-E and on which receptors HLA-E interacts with. Variable sites within the HLA-E regulatory and coding segments may influence the gene function by modifying its expression pattern or encoded molecule, thus influencing its interaction with receptors and the peptide. Here we propose an approach to evaluate the gene structure, haplotype pattern and the complete HLA-E variability, including regulatory (promoter and 3'UTR) and coding segments (with introns), by using massively parallel sequencing. We used this approach to investigate the variability of 420 samples from a highly admixed population, Brazilians. Considering a segment of about 7 kb, 63 variable sites were detected, arranged into 75 extended haplotypes. We detected 37 different promoter sequences (but few frequent ones), 27 different coding sequences (15 representing new HLA-E alleles) and 12 haplotypes at the 3'UTR segment, two of them presenting a summed frequency of 90%. Despite the number of coding alleles, they encode mainly two different full-length molecules, known as E*01:01 and E*01:03, which together correspond to about 90% of all. In addition, differently from what has been previously observed for other non-classical HLA genes, the relationship among the HLA-E promoter, coding and 3'UTR haplotypes is not straightforward, because the same promoter and 3'UTR haplotypes were many times associated with different HLA-E coding haplotypes. These data reinforce the presence of only two main full-length HLA-E molecules encoded by the many HLA-E alleles detected in our population sample. In addition, these data do indicate that the distal HLA-E promoter is by

  5. Use of Samples Differing Markedly in Salience May Encourage Use of a Single-Code/Default Strategy in Matching-to-Sample in Pigeons

    Science.gov (United States)

    Grant, Douglas S.

    2009-01-01

    To test the hypothesis that pigeons will only code the more salient sample when samples differ markedly in salience, pigeons were trained with samples consisting of a 2-s presentation of food (highly salient sample) and an 8-s presentation of keylight (less salient sample). During retention testing, pigeons tended to respond at longer delays as if…

  6. Sampling-based correlation estimation for distributed source coding under rate and complexity constraints.

    Science.gov (United States)

    Cheung, Ngai-Man; Wang, Huisheng; Ortega, Antonio

    2008-11-01

    In many practical distributed source coding (DSC) applications, correlation information has to be estimated at the encoder in order to determine the encoding rate. Coding efficiency depends strongly on the accuracy of this correlation estimation. While error in estimation is inevitable, the impact of estimation error on compression efficiency has not been sufficiently studied for the DSC problem. In this paper, we study correlation estimation subject to rate and complexity constraints, and its impact on coding efficiency in a DSC framework for practical distributed image and video applications. We focus, in particular, on applications where binary correlation models are exploited for Slepian-Wolf coding and sampling techniques are used to estimate the correlation; extensions to other correlation models are also briefly discussed. In the first part of this paper, we investigate the compression of binary data. We first propose a model to characterize the relationship between the number of samples used in estimation and the coding rate penalty, in the case of encoding of a single binary source. The model is then extended to scenarios where multiple binary sources are compressed, and based on the model we propose an algorithm to determine the number of samples allocated to different sources so that the overall rate penalty can be minimized, subject to a constraint on the total number of samples. The second part of this paper studies compression of continuous valued data. We propose a model-based estimation for the particular but important situations where binary bit-planes are extracted from a continuous-valued input source, and each bit-plane is compressed using DSC. The proposed model-based method first estimates the source and correlation noise models using continuous valued samples, and then uses the models to derive the bit-plane statistics analytically. We also extend the model-based estimation to the cases when bit-planes are extracted based on the
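
    The central trade-off can be illustrated in a few lines: the crossover probability between the source and the side information is estimated from a limited number of sample pairs, and the Slepian-Wolf coding rate must include a margin that shrinks as more samples are used. The margin rule below is an assumed illustration, not the allocation algorithm of the paper.

```python
# Sketch: estimate the crossover probability p between source X and side
# information Y from n sample pairs, then set a Slepian-Wolf rate H(p_hat +
# margin). The margin rule is an assumed illustration of the rate penalty
# caused by estimation error, not the paper's allocation algorithm.
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

rng = np.random.default_rng(0)
p_true = 0.05
for n in (50, 500, 5000):
    x = rng.integers(0, 2, n)
    y = x ^ (rng.random(n) < p_true)            # correlated side information
    p_hat = np.mean(x != y)
    margin = 2 * np.sqrt(p_hat * (1 - p_hat) / n)   # rough confidence margin
    rate = binary_entropy(min(p_hat + margin, 0.5))
    print(f"n={n:5d}  p_hat={p_hat:.3f}  rate={rate:.3f}  "
          f"ideal={binary_entropy(p_true):.3f}")
```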

  7. How anonymous is 'anonymous'? Some suggestions towards a coherent universal coding system for genetic samples.

    Science.gov (United States)

    Schmidt, Harald; Callier, Shawneequa

    2012-05-01

    So-called 'anonymous' tissue samples are widely used in research. Because they lack externally identifying information, they are viewed as useful in reconciling conflicts between the control, privacy and confidentiality interests of those from whom the samples originated and the public (or commercial) interest in carrying out research, as reflected in 'consent or anonymise' policies. High level guidance documents suggest that withdrawal of consent and samples and the provision of feedback are impossible in the case of anonymous samples. In view of recent developments in science and consumer-driven genomics the authors argue that such statements are misleading and only muddle complex ethical questions about possible entitlements to control over samples. The authors therefore propose that terms such as 'anonymised', 'anonymous' or 'non-identifiable' be removed entirely from documents describing research samples, especially from those aimed at the public. This is necessary as a matter of conceptual clarity and because failure to do so may jeopardise public trust in the governance of large scale databases. As there is wide variation in the taxonomy for tissue samples and no uniform national or international standards, the authors propose that a numeral-based universal coding system be implemented that focuses on specifying incremental levels of identifiability, rather than use terms that imply that the reidentification of research samples and associated actions are categorically impossible.

  8. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J

    2007-03-15

    The TRITGO code was developed to estimate the tritium production and distribution of high temperature gas cooled reactors (HTGR), especially the GTMHR350 by General Atomics. In this study, the tritium production and distribution of the NHDD were analyzed using the TRITGO code. The TRITGO code was improved with a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on the GTMHR600, because the NHDD has been designed referring to the GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a satisfactory result considering that the tritium activity limit in the Japanese regulatory guide is 5.6 Bq/g-H2. The basic system to analyze the tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and in the suggested method for the IS loop, and the current input was prepared for the GTMHR600 rather than the NHDD. Qualitative analysis of the distribution and IS-loop models and quantitative analysis of the input should be carried out in the future.

  9. Translation of Korean Medicine Use to ICD-Codes Using National Health Insurance Service-National Sample Cohort

    Directory of Open Access Journals (Sweden)

    Ye-Seul Lee

    2016-01-01

    Background. Korean medicine was incorporated into the Korean Classification of Diseases (KCD 6) through the development of U codes (U20–U99). Studies of the burden of disease have used summary measures such as disability-adjusted life years. Although Korean medicine is included in the official health care system, studies of the burden of disease that include Korean medicine are lacking. Methods. A data-based approach was used with National Health Insurance Service-National Sample Cohort data for the year 2012. U code diagnoses for patients covered by National Health Insurance were collected. Using the main disease and subdisease codes, the proportion of U codes was redistributed into the related KCD 6 codes and visualized. U code and KCD code relevance was appraised prior to the analysis by consultation with medical professionals and from the beta draft version of the International Classification of Diseases-11 traditional medicine chapter. Results. This approach enabled redistribution of U codes into KCD 6 codes. Musculoskeletal diseases had the greatest increase in the burden of disease through this approach. Conclusion. This study provides a possible method of incorporating Korean medicine into burden of disease analyses through a data-based approach. Further studies should analyze potential yearly differences.
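
    The redistribution step can be pictured as spreading each U-code count over related KCD 6 codes in proportion to an agreed mapping. The mapping and counts in the sketch below are invented for illustration; the study derived them from expert consultation and the ICD-11 traditional medicine draft.

```python
# Hypothetical sketch of redistributing U-code visit counts into related
# KCD 6 codes by fixed proportions (mapping and counts are invented for
# illustration only).
u_code_counts = {"U22": 1000, "U61": 400}
redistribution = {                       # U code -> {KCD code: proportion}
    "U22": {"M54": 0.7, "M51": 0.3},
    "U61": {"K59": 0.5, "R10": 0.5},
}

kcd_counts = {}
for u_code, count in u_code_counts.items():
    for kcd_code, share in redistribution[u_code].items():
        kcd_counts[kcd_code] = kcd_counts.get(kcd_code, 0) + count * share

print(kcd_counts)   # {'M54': 700.0, 'M51': 300.0, 'K59': 200.0, 'R10': 200.0}
```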

  10. Non-coding cancer driver candidates identified with a sample- and position-specific model of the somatic mutation rate

    Science.gov (United States)

    Juul, Malene; Bertl, Johanna; Guo, Qianyun; Nielsen, Morten Muhlig; Świtnicki, Michał; Hornshøj, Henrik; Madsen, Tobias; Hobolth, Asger; Pedersen, Jakob Skou

    2017-01-01

    Non-coding mutations may drive cancer development. Statistical detection of non-coding driver regions is challenged by a varying mutation rate and uncertainty of functional impact. Here, we develop a statistically founded non-coding driver-detection method, ncdDetect, which includes sample-specific mutational signatures, long-range mutation rate variation, and position-specific impact measures. Using ncdDetect, we screened non-coding regulatory regions of protein-coding genes across a pan-cancer set of whole-genomes (n = 505), which top-ranked known drivers and identified new candidates. For individual candidates, presence of non-coding mutations associates with altered expression or decreased patient survival across an independent pan-cancer sample set (n = 5454). This includes an antigen-presenting gene (CD1A), where 5’UTR mutations correlate significantly with decreased survival in melanoma. Additionally, mutations in a base-excision-repair gene (SMUG1) correlate with a C-to-T mutational signature. Overall, we find that a rich model of mutational heterogeneity facilitates non-coding driver identification and integrative analysis points to candidates of potential clinical relevance. DOI: http://dx.doi.org/10.7554/eLife.21778.001 PMID:28362259
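
    At its core, driver detection asks whether a candidate region carries more mutations than a sample- and position-specific background model predicts. The sketch below illustrates that kind of burden test with invented per-position probabilities and a Poisson approximation; it is not ncdDetect's actual scoring model.

```python
# Illustrative burden test for a candidate non-coding region: per-position,
# per-sample background mutation probabilities (invented numbers) give an
# expected background count; a Poisson approximation then gives the chance of
# seeing at least the observed number of mutations by background alone.
import numpy as np

rng = np.random.default_rng(0)
n_samples, region_len = 500, 300
background = rng.uniform(1e-6, 5e-5, size=(n_samples, region_len))  # assumed rates
observed_mutations = 12

lam = background.sum()                       # expected background mutation count
sims = rng.poisson(lam, size=100_000)        # Poisson approx. of the background
p_value = np.mean(sims >= observed_mutations)
print(f"expected background count = {lam:.2f}, empirical p-value = {p_value:.2e}")
```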

  11. Interpretation of conduit voltage measurements on the Poloidal Field Insert Sample using the CUDI-CICC numerical code

    NARCIS (Netherlands)

    Ilyin, Y.; Nijhuis, Arend; ten Kate, Herman H.J.

    2006-01-01

    The results of simulations with the CUDI–CICC code on the poloidal field insert sample (PFIS) tested in the SULTAN test facility are presented. The interpretations are based on current distribution analysis from self-field measurements with Hall sensor arrays and current sharing measurements. The

  12. mRNA transcript quantification in archival samples using multiplexed, color-coded probes

    Directory of Open Access Journals (Sweden)

    Gullane Patrick

    2011-05-01

    Background. A recently developed probe-based technology, the NanoString nCounter™ gene expression system, has been shown to allow accurate mRNA transcript quantification using low amounts of total RNA. We assessed the ability of this technology for mRNA expression quantification in archived formalin-fixed, paraffin-embedded (FFPE) oral carcinoma samples. Results. We measured the mRNA transcript abundance of 20 genes (COL3A1, COL4A1, COL5A1, COL5A2, CTHRC1, CXCL1, CXCL13, MMP1, P4HA2, PDPN, PLOD2, POSTN, SDHA, SERPINE1, SERPINE2, SERPINH1, THBS2, TNC, GAPDH, RPS18) in 38 samples (19 paired fresh-frozen and FFPE oral carcinoma tissues, archived from 1997-2008) by both NanoString and SYBR Green I fluorescent dye-based quantitative real-time PCR (RQ-PCR). We compared gene expression data obtained by NanoString vs. RQ-PCR in both fresh-frozen and FFPE samples. Fresh-frozen samples showed a good overall Pearson correlation of 0.78, and FFPE samples showed a lower overall correlation coefficient of 0.59, which is likely due to sample quality. We found a higher correlation coefficient between fresh-frozen and FFPE samples analyzed by NanoString (r = 0.90) compared to fresh-frozen and FFPE samples analyzed by RQ-PCR (r = 0.50). In addition, NanoString data showed a higher mean correlation (r = 0.94) between individual fresh-frozen and FFPE sample pairs compared to RQ-PCR (r = 0.53). Conclusions. Based on our results, we conclude that both technologies are useful for gene expression quantification in fresh-frozen or FFPE tissues; however, the probe-based NanoString method achieved superior gene expression quantification results when compared to RQ-PCR in archived FFPE samples. We believe that this newly developed technique is optimal for large-scale validation studies using total RNA isolated from archived, FFPE samples.

  13. mRNA transcript quantification in archival samples using multiplexed, color-coded probes.

    Science.gov (United States)

    Reis, Patricia P; Waldron, Levi; Goswami, Rashmi S; Xu, Wei; Xuan, Yali; Perez-Ordonez, Bayardo; Gullane, Patrick; Irish, Jonathan; Jurisica, Igor; Kamel-Reid, Suzanne

    2011-05-09

    A recently developed probe-based technology, the NanoString nCounter™ gene expression system, has been shown to allow accurate mRNA transcript quantification using low amounts of total RNA. We assessed the ability of this technology for mRNA expression quantification in archived formalin-fixed, paraffin-embedded (FFPE) oral carcinoma samples. We measured the mRNA transcript abundance of 20 genes (COL3A1, COL4A1, COL5A1, COL5A2, CTHRC1, CXCL1, CXCL13, MMP1, P4HA2, PDPN, PLOD2, POSTN, SDHA, SERPINE1, SERPINE2, SERPINH1, THBS2, TNC, GAPDH, RPS18) in 38 samples (19 paired fresh-frozen and FFPE oral carcinoma tissues, archived from 1997-2008) by both NanoString and SYBR Green I fluorescent dye-based quantitative real-time PCR (RQ-PCR). We compared gene expression data obtained by NanoString vs. RQ-PCR in both fresh-frozen and FFPE samples. Fresh-frozen samples showed a good overall Pearson correlation of 0.78, and FFPE samples showed a lower overall correlation coefficient of 0.59, which is likely due to sample quality. We found a higher correlation coefficient between fresh-frozen and FFPE samples analyzed by NanoString (r = 0.90) compared to fresh-frozen and FFPE samples analyzed by RQ-PCR (r = 0.50). In addition, NanoString data showed a higher mean correlation (r = 0.94) between individual fresh-frozen and FFPE sample pairs compared to RQ-PCR (r = 0.53). Based on our results, we conclude that both technologies are useful for gene expression quantification in fresh-frozen or FFPE tissues; however, the probe-based NanoString method achieved superior gene expression quantification results when compared to RQ-PCR in archived FFPE samples. We believe that this newly developed technique is optimal for large-scale validation studies using total RNA isolated from archived, FFPE samples.

  14. How anonymous is ‘anonymous’? Some suggestions towards a coherent universal coding system for genetic samples

    Science.gov (United States)

    Schmidt, Harald; Callier, Shawneequa

    2012-01-01

    So-called ‘anonymous’ tissue samples are widely used in research. Because they lack externally identifying information, they are viewed as useful in reconciling conflicts between the control, privacy and confidentiality interests of those from whom the samples originated and the public (or commercial) interest in carrying out research, as reflected in ‘consent or anonymise’ policies. High level guidance documents suggest that withdrawal of consent and samples and the provision of feedback are impossible in the case of anonymous samples. In view of recent developments in science and consumer-driven genomics the authors argue that such statements are misleading and only muddle complex ethical questions about possible entitlements to control over samples. The authors therefore propose that terms such as ‘anonymised’, ‘anonymous’ or ‘non-identifiable’ be removed entirely from documents describing research samples, especially from those aimed at the public. This is necessary as a matter of conceptual clarity and because failure to do so may jeopardise public trust in the governance of large scale databases. As there is wide variation in the taxonomy for tissue samples and no uniform national or international standards, the authors propose that a numeral-based universal coding system be implemented that focuses on specifying incremental levels of identifiability, rather than use terms that imply that the reidentification of research samples and associated actions are categorically impossible. PMID:22345546

  15. Image Processing Code for Sharpening Photoelastic Fringe Patterns and Its Usage in Determination of Stress Intensity Factors in a Sample Contact Problem

    OpenAIRE

    Khaleghian, Seyedmeysam; Emami, Anahita; Soltani, Nasser

    2015-01-01

    This study presented an image processing code used for sharpening photoelastic fringe patterns of transparent materials in photoelastic experiments to determine the stress distribution. The algorithm of this image processing method was coded in C#. For evaluation of this code, the results of a photoelastic experiment on a sample contact problem between a half-plane with an oblique edge crack and a tilted wedge using this image processing method was com...

  16. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. In particular, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR … A Python module, GalIMF, is provided, allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
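
    The optimal-sampling idea can be pictured for a single-slope IMF: instead of drawing stellar masses at random, the mass range is cut deterministically into segments that each integrate to exactly one star, and each star is assigned the mass contained in its segment. The sketch below is a generic illustration under that assumption (Salpeter-like slope, illustrative mass limits); it is not the GalIMF code itself.

```python
# Sketch of deterministic ("optimal") sampling of a single-slope IMF
# xi(m) ~ m^-alpha on [m_lo, m_hi]: the mass range is cut into N segments
# that each integrate to exactly one star, and each star gets the mass
# integral of its segment. Parameters are illustrative; not the GalIMF code.
import numpy as np

def optimal_sample(N=100, alpha=2.35, m_lo=0.08, m_hi=120.0):
    a1 = 1.0 - alpha                       # exponent for the number integral
    a2 = 2.0 - alpha                       # exponent for the mass integral
    # normalisation k so the number integral over [m_lo, m_hi] equals N stars
    k = N * a1 / (m_hi**a1 - m_lo**a1)
    # segment boundaries: each segment integrates to exactly one star
    i = np.arange(N + 1)
    bounds = (m_lo**a1 + i * a1 / k) ** (1.0 / a1)
    # stellar mass = mass integral over the segment
    masses = k / a2 * (bounds[1:]**a2 - bounds[:-1]**a2)
    return masses

stars = optimal_sample()
print(f"most massive star: {stars.max():.1f} Msun, total mass: {stars.sum():.1f} Msun")
```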

  17. Code BETAL for calculating alpha/beta activities in environmental samples; Programa de ordenador Betal para el calculo de la actividad Beta/Alfa de muestras ambientales

    Energy Technology Data Exchange (ETDEWEB)

    Romero, L.; Travesi, A.

    1983-07-01

    A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of the results of total alpha/beta activity measurements in environmental samples. This code performs the calculations needed to transform the activities measured in total counts into pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. Furthermore, it estimates the standard deviation of the result and calculates the lower limit of detection for each measurement. The code operates interactively through a screen-operator dialogue, requesting the data needed to calculate the activity in each case via screen prompts. The code can be executed from any screen-and-keyboard terminal of a computer that supports FORTRAN IV, with a printer connected to that computer. (Author) 5 refs.
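
    The kind of transformation BETAL automates can be sketched as follows; the efficiency correction and the Currie-style lower limit of detection are standard formulas, but the parameter values and the exact equations used by BETAL itself are assumptions here.

```python
# Sketch of a gross alpha/beta calculation of the kind BETAL automates
# (standard efficiency correction and a Currie-style LLD; the specific
# equations and parameters of BETAL itself are assumed, not documented here).
def activity_pci_per_liter(gross_counts, background_counts, count_time_min,
                           efficiency, volume_liters):
    net_cpm = (gross_counts - background_counts) / count_time_min
    dpm = net_cpm / efficiency                  # counts -> disintegrations
    return dpm / 2.22 / volume_liters           # 1 pCi = 2.22 dpm

def lld_pci_per_liter(background_counts, count_time_min, efficiency, volume_liters):
    # Currie lower limit of detection for paired sample/background counts
    lld_counts = 2.71 + 4.65 * background_counts ** 0.5
    return lld_counts / count_time_min / efficiency / 2.22 / volume_liters

act = activity_pci_per_liter(850, 120, 100, 0.32, 0.5)
lld = lld_pci_per_liter(120, 100, 0.32, 0.5)
print(f"beta activity = {act:.2f} pCi/l (LLD = {lld:.2f} pCi/l)")
```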

  18. PCR assay based on DNA coding for 16S rRNA for detection and identification of mycobacteria in clinical samples

    NARCIS (Netherlands)

    Kox, L. F.; van Leeuwen, J.; Knijper, S.; Jansen, H. M.; Kolk, A. H.

    1995-01-01

    A PCR and a reverse cross blot hybridization assay were developed for the detection and identification of mycobacteria in clinical samples. The PCR amplifies a part of the DNA coding for 16S rRNA with a set of primers that is specific for the genus Mycobacterium and that flanks species-specific

  19. Coding of DNA samples and data in the pharmaceutical industry: current practices and future directions--perspective of the I-PWG.

    Science.gov (United States)

    Franc, M A; Cohen, N; Warner, A W; Shaw, P M; Groenen, P; Snapir, A

    2011-04-01

    DNA samples collected in clinical trials and stored for future research are valuable to pharmaceutical drug development. Given the perceived higher risk associated with genetic research, industry has implemented complex coding methods for DNA. Following years of experience with these methods and with addressing questions from institutional review boards (IRBs), ethics committees (ECs) and health authorities, the industry has started reexamining the extent of the added value offered by these methods. With the goal of harmonization, the Industry Pharmacogenomics Working Group (I-PWG) conducted a survey to gain an understanding of company practices for DNA coding and to solicit opinions on their effectiveness at protecting privacy. The results of the survey and the limitations of the coding methods are described. The I-PWG recommends dialogue with key stakeholders regarding coding practices such that equal standards are applied to DNA and non-DNA samples. The I-PWG believes that industry standards for privacy protection should provide adequate safeguards for DNA and non-DNA samples/data and suggests a need for more universal standards for samples stored for future research.

  20. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  1. Establishing and evaluating bar-code technology in a blood sampling system: a model based on a human-centered design method.

    Science.gov (United States)

    Chou, Shin-Shang; Yan, Hsiu-Fang; Huang, Hsiu-Ya; Tseng, Kuan-Jui; Kuo, Shu-Chen

    2012-01-01

    This study intended to use a human-centered design method to develop bar-code technology for the blood sampling process. Using multilevel analysis to gather the information, the bar-code technology was constructed to verify patient identification, simplify the work process, and reduce medical error rates. A Technology Acceptance Model questionnaire was developed to assess the effectiveness of the system, and data on patient identification and sample errors were collected daily. The average score of the 8-item users' perceived ease of use was 25.21 (3.72), the 9-item users' perceived usefulness was 28.53 (5.00), and the 14-item task-technology fit was 52.24 (7.09). The rates of patient identification errors and samples with cancelled orders dropped to zero; however, new errors related to the position of barcode stickers on the sample tubes emerged after the new system was deployed. Overall, more than half of the nurses (62.5%) were willing to use the new system.

  2. Isotope-coded ESI-enhancing derivatization reagents for differential analysis, quantification and profiling of metabolites in biological samples by LC/MS: A review.

    Science.gov (United States)

    Higashi, Tatsuya; Ogawa, Shoujiro

    2016-10-25

    The analysis of the qualitative and quantitative changes of metabolites in body fluids and tissues yields valuable information for the diagnosis, pathological analysis and treatment of many diseases. Recently, liquid chromatography/electrospray ionization-(tandem) mass spectrometry [LC/ESI-MS(/MS)] has been widely used for these purposes due to the high separation capability of LC, broad coverage of ESI for various compounds and high specificity of MS(/MS). However, there are still two major problems to be solved regarding biological sample analysis: a lack of sensitivity and the limited availability of stable isotope-labeled analogues (internal standards, ISs) for most metabolites. Stable isotope-coded derivatization (ICD) can be the answer to these problems. In ICD, different isotope-coded moieties are introduced into the metabolites, and one of the resulting derivatives can serve as the IS, which minimizes matrix effects. Furthermore, the derivatization can improve the ESI efficiency, fragmentation property in the MS/MS and chromatographic behavior of the metabolites, which lead to high sensitivity and specificity in the various detection modes. Based on this background, this article reviews the recently reported isotope-coded ESI-enhancing derivatization (ICEED) reagents, which are key components for ICD-based LC/MS(/MS) studies, and their applications to the detection, identification, quantification and profiling of metabolites in human and animal samples. LC/MS(/MS) using ICEED reagents is a powerful method, especially for the differential analysis (relative quantification) of metabolites in two comparative samples, simultaneous quantification of multiple metabolites whose stable isotope-labeled ISs are not available, and submetabolome profiling. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Cyclone Codes

    OpenAIRE

    Schindelhauer, Christian; Jakoby, Andreas; Köhler, Sven

    2016-01-01

    We introduce Cyclone codes which are rateless erasure resilient codes. They combine Pair codes with Luby Transform (LT) codes by computing a code symbol from a random set of data symbols using bitwise XOR and cyclic shift operations. The number of data symbols is chosen according to the Robust Soliton distribution. XOR and cyclic shift operations establish a unitary commutative ring if data symbols have a length of $p-1$ bits, for some prime number $p$. We consider the graph given by code sym...

  4. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  5. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    Summary of the key points from the main report, Dokumentation og evaluering af Coding Class (Documentation and Evaluation of Coding Class).

  6. code {poems}

    Directory of Open Access Journals (Sweden)

    Ishac Bertran

    2012-08-01

    Full Text Available "Exploring the potential of code to communicate at the level of poetry," the code­ {poems} project solicited submissions from code­writers in response to the notion of a poem, written in a software language which is semantically valid. These selections reveal the inner workings, constitutive elements, and styles of both a particular software and its authors.

  7. Sharing code.

    Science.gov (United States)

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  8. Analog Coding.

    Science.gov (United States)

    Keywords: CODING, ANALOG SYSTEMS, INFORMATION THEORY, DATA TRANSMISSION SYSTEMS, TRANSMITTER RECEIVERS, WHITE NOISE, PROBABILITY, ERRORS, PROBABILITY DENSITY FUNCTIONS, DIFFERENTIAL EQUATIONS, SET THEORY, COMPUTER PROGRAMS

  9. Monte Carlo burnup code acceleration with the correlated sampling method. Preliminary test on an UOX cell with TRIPOLI-4{sup R}

    Energy Technology Data Exchange (ETDEWEB)

    Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M. [Commissariat a l' Energie Atomique et aux Energies Alternatives CEA, Service d' Etude des Reacteurs et de Mathematiques Appliquees, DEN/DANS/DM2S/SERMA/LTSD, F91191 Gif-sur-Yvette cedex (France)

    2013-07-01

    In recent years, Monte Carlo burnup/depletion codes have appeared which couple a Monte Carlo code, simulating the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The drawback is the prohibitive calculation time due to the expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if one could avoid repeated Monte Carlo transport sequences. For example, it may seem interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the other time/burnup steps (the different burnup steps are seen as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
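
    The concentration-perturbation idea rests on correlated sampling: a single set of Monte Carlo histories drawn for the reference configuration is reused for a perturbed configuration by reweighting each history with the ratio of the perturbed to the reference densities, so both estimates share the same statistical noise. The toy integral below is a generic illustration of that principle, not the TRIPOLI-4 scheme.

```python
# Toy correlated sampling: one set of samples drawn from a reference
# exponential density is reused to estimate an average under a perturbed
# density by reweighting with the density ratio (generic illustration only).
import numpy as np

rng = np.random.default_rng(0)
sigma_ref, sigma_pert = 1.0, 1.1           # "reference" and "perturbed" parameters
x = rng.exponential(1.0 / sigma_ref, size=100_000)

f = x**2                                    # quantity being averaged
w = (sigma_pert * np.exp(-sigma_pert * x)) / (sigma_ref * np.exp(-sigma_ref * x))

ref_estimate = f.mean()                     # E[x^2] under reference = 2/sigma_ref^2
pert_estimate = np.mean(f * w)              # same histories, reweighted
print(f"reference: {ref_estimate:.3f}, perturbed (correlated): {pert_estimate:.3f},"
      f" exact perturbed: {2 / sigma_pert**2:.3f}")
```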

  10. Divergence coding for convolutional codes

    Directory of Open Access Journals (Sweden)

    Valery Zolotarev

    2017-01-01

    In this paper we propose a new coding/decoding scheme based on the divergence principle. A new divergent multithreshold decoder (MTD) for convolutional self-orthogonal codes contains two threshold elements. The second threshold element decodes the code with a code distance one greater than that of the first threshold element. The error-correcting capability of the new MTD modification is higher than that of the traditional MTD. Simulation results show that the divergent schemes bring the region of effective operation approximately 0.5 dB closer to channel capacity. Note that if a sufficiently effective Viterbi decoder is used instead of the first threshold element, the divergence principle can achieve even more. Index Terms — error-correcting coding, convolutional code, decoder, multithreshold decoder, Viterbi algorithm.

  11. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the "Hello World" convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech …; alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market’s emptying out of possibilities for free

  12. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  13. Coding labour

    National Research Council Canada - National Science Library

    McCosker, Anthony; Milne, Esther

    2014-01-01

    ... software. Code encompasses the laws that regulate human affairs and the operation of capital, behavioural mores and accepted ways of acting, but it also defines the building blocks of life as DNA...

  14. Automated determination of the stable carbon isotopic composition (δ13C) of total dissolved inorganic carbon (DIC) and total nonpurgeable dissolved organic carbon (DOC) in aqueous samples: RSIL lab codes 1851 and 1852

    Science.gov (United States)

    Révész, Kinga M.; Doctor, Daniel H.

    2014-01-01

    The purposes of the Reston Stable Isotope Laboratory (RSIL) lab codes 1851 and 1852 are to determine the total carbon mass and the ratio of the stable isotopes of carbon (δ13C) for total dissolved inorganic carbon (DIC, lab code 1851) and total nonpurgeable dissolved organic carbon (DOC, lab code 1852) in aqueous samples. The analysis procedure is automated according to a method that utilizes a total carbon analyzer as a peripheral sample preparation device for analysis of carbon dioxide (CO2) gas by a continuous-flow isotope ratio mass spectrometer (CF-IRMS). The carbon analyzer produces CO2 and determines the carbon mass in parts per million (ppm) of DIC and DOC in each sample separately, and the CF-IRMS determines the carbon isotope ratio of the produced CO2. This configuration provides a fully automated analysis of total carbon mass and δ13C with no operator intervention, additional sample preparation, or other manual analysis. To determine the DIC, the carbon analyzer transfers a specified sample volume to a heated (70 °C) reaction vessel with a preprogrammed volume of 10% phosphoric acid (H3PO4), which allows the carbonate and bicarbonate species in the sample to dissociate to CO2. The CO2 from the reacted sample is subsequently purged with a flow of helium gas that sweeps the CO2 through an infrared CO2 detector and quantifies the CO2. The CO2 is then carried through a high-temperature (650 °C) scrubber reactor, a series of water traps, and ultimately to the inlet of the mass spectrometer. For the analysis of total dissolved organic carbon, the carbon analyzer performs a second step on the sample in the heated reaction vessel during which a preprogrammed volume of sodium persulfate (Na2S2O8) is added, and the hydroxyl radicals oxidize the organics to CO2. Samples containing 2 ppm to 30,000 ppm of carbon are analyzed. The precision of the carbon isotope analysis is within 0.3 per mill for DIC, and within 0.5 per mill for DOC.
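
    For reference, the reported δ13C values follow the conventional delta notation relative to the VPDB standard; a minimal sketch of that calculation is shown below, with an illustrative measured ratio (not RSIL data).

```python
# Delta notation for carbon isotopes: delta13C (per mil) relative to the
# VPDB standard. The measured ratio below is an illustrative number only.
R_VPDB = 0.0111802                # commonly used 13C/12C ratio of the VPDB standard

def delta13C_permil(r_sample, r_standard=R_VPDB):
    return (r_sample / r_standard - 1.0) * 1000.0

print(f"{delta13C_permil(0.0110500):.2f} per mil")   # roughly -11.6 per mil
```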

  15. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.

  16. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term applicable to these techniques, often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the

  17. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  18. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  19. Network Coding

    Indian Academy of Sciences (India)

    Network coding is a technique to increase the amount of information flow in a network by making the key observation that information flow is fundamentally different from commodity flow. Whereas, under traditional methods of operation of data networks, intermediate nodes are restricted to simply forwarding their incoming.

  20. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol, and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University in Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project from November 2016 to May 2017...

  1. Network Coding

    Indian Academy of Sciences (India)

    Network Coding. K V Rashmi, Nihar B Shah and P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: http://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  2. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes – The Sipser–Spielman Construction. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1.

  3. Polar Codes

    Science.gov (United States)

    2014-12-01

    added by the decoder is K/ρ + Td. By the last assumption, Td and Te are both ≤ K/ρ, so the total latency added is between 2K/ρ and 4K/ρ. For example … better resolution near the decision point. Reference [12] showed that in decoding a (1024, 512) polar code, using 6-bit LLRs resulted in performance

  4. Convolutional-Code-Specific CRC Code Design

    OpenAIRE

    Lou, Chung-Yu; Daneshrad, Babak; Wesel, Richard D.

    2015-01-01

    Cyclic redundancy check (CRC) codes check if a codeword is correctly received. This paper presents an algorithm to design CRC codes that are optimized for the code-specific error behavior of a specified feedforward convolutional code. The algorithm utilizes two distinct approaches to computing undetected error probability of a CRC code used with a specific convolutional code. The first approach enumerates the error patterns of the convolutional code and tests if each of them is detectable. Th...
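
    The detectability test underlying the first approach reduces to polynomial division over GF(2): an error pattern goes undetected by the CRC exactly when its polynomial is divisible by the generator polynomial. The sketch below illustrates this with an assumed CRC-8 generator and hypothetical error patterns.

```python
# Sketch of the detectability test: an error pattern (as a GF(2) polynomial)
# is undetected by a CRC exactly when it is divisible by the generator
# polynomial. Generator and error patterns below are illustrative choices.
def gf2_remainder(dividend: int, generator: int) -> int:
    glen = generator.bit_length()
    while dividend.bit_length() >= glen:
        dividend ^= generator << (dividend.bit_length() - glen)
    return dividend

CRC8 = 0x107                  # CRC-8 generator x^8 + x^2 + x + 1 (example choice)
print(gf2_remainder(0b1011, CRC8) != 0)      # short pattern -> detected (True)
print(gf2_remainder(CRC8 << 5, CRC8) != 0)   # multiple of generator -> undetected (False)
```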

  5. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  6. Concatenated codes with convolutional inner codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Zyablov, Viktor

    1988-01-01

    The minimum distance of concatenated codes with Reed-Solomon outer codes and convolutional inner codes is studied. For suitable combinations of parameters the minimum distance can be lower-bounded by the product of the minimum distances of the inner and outer codes. For a randomized ensemble of concatenated codes a lower bound of the Gilbert-Varshamov type is proved...

  7. Multiplexed coded time domain sampling with metamaterials.

    Science.gov (United States)

    Nadell, Christian C; Fan, Kebin; Suen, Jonathan Y; Padilla, Willie J

    2017-10-16

    The far infrared region of the electromagnetic spectrum often necessitates the use of thermal detectors that, by nature, typically have poor response times and diminished sensitivities, at least compared to adjacent bands. However, many signals of interest contain frequency components far too fast to be reliably measured with such detectors, and hence expensive and inefficient alternatives are brought to bear. Here we propose and experimentally validate a new method leveraging the speed and scalability of dynamic metamaterial modulators to encode high-frequency signal components at a lower frequency, making them reliably measurable with thermal detectors that would otherwise be too slow. An optimal weighing scheme design in the time domain is realized, the result being an imaging system whose time resolution is independent of detector speed and is rather limited only by the speed of the modulator and the reproducibility of the signal of interest.
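
    The weighing-scheme idea can be illustrated with a Hadamard code: the fast signal is modulated by rows of a ±1 matrix, a slow detector integrates one reading per coded frame, and the fast samples are recovered by inverting the code. Sizes and noise below are illustrative assumptions, not the authors' metamaterial modulator design.

```python
# Illustration of coded time-domain multiplexing: a fast signal of length N is
# modulated by rows of a +/-1 Hadamard matrix, a slow detector records one
# integrated value per coded frame, and the fast samples are recovered by
# inverting the code. Sizes and noise are illustrative assumptions.
import numpy as np
from scipy.linalg import hadamard

N = 64
H = hadamard(N)                                  # +/-1 weighing matrix
rng = np.random.default_rng(0)
fast_signal = np.sin(2 * np.pi * 5 * np.arange(N) / N) + 0.3 * rng.standard_normal(N)

slow_measurements = H @ fast_signal              # one integrated reading per frame
recovered = np.linalg.inv(H) @ slow_measurements # H^-1 = H.T / N for Hadamard
print("max reconstruction error:", np.abs(recovered - fast_signal).max())
```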

  8. Efficiency of a model human image code

    Science.gov (United States)

    Watson, Andrew B.

    1987-01-01

    Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection artifacts, the image-code size is about 1 bit/pixel.

  9. Sinusoidal Coding.

    Science.gov (United States)

    1995-01-01

    samples of an underlying vocal tract envelope, in MBE they are allowed to be unconstrained free variables and are chosen to render s(n) a minimum-mean... "Bandwidth Reduction and Time Scaling of Speech Signals," in IEEE Trans. Acoust., Speech and Signal Proc., ASSP-27, (2), 1979, pp. 121-133. [7] P. Hedelin

  10. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field: * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual

  11. Research on Primary Shielding Calculation Source Generation Codes

    OpenAIRE

    Zheng Zheng; Mei Qiliang; Li Hui; Shangguan Danhua; Zhang Guangchun

    2017-01-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energy and weights from the CDFs. A source generation code is developed to transform three di...

  12. Affine Grassmann codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Beelen, Peter; Ghorpade, Sudhir Ramakant

    2010-01-01

    We consider a new class of linear codes, called affine Grassmann codes. These can be viewed as a variant of generalized Reed-Muller codes and are closely related to Grassmann codes. We determine the length, dimension, and the minimum distance of any affine Grassmann code. Moreover, we show...

  13. Turbo Codes Extended with Outer BCH Code

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl

    1996-01-01

    The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is propose...... including an outer BCH code correcting a few bit errors.......The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is proposed...

  14. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  15. Generalized concatenated quantum codes

    Science.gov (United States)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng, Bei

    2009-05-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  16. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  17. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  18. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.
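
    As a minimal sketch of the sparse-coding building block referred to above, the following ISTA loop solves the standard lasso problem min_s 0.5||x - Ds||^2 + lam||s||_1 for a single sample; the dictionary, signal, and regularization weight are made-up examples, and the domain-transfer and supervision terms of STSC are not modeled here.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, iters=300):
    """Solve min_s 0.5*||x - D s||^2 + lam*||s||_1 by iterative soft-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the smooth part
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        z = s - step * (D.T @ (D @ s - x))        # gradient step on the quadratic term
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return s

# Hypothetical unit-norm dictionary and a signal built from two of its atoms.
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]
s = ista_sparse_code(x, D, lam=0.05)
print("active atoms:", np.nonzero(np.abs(s) > 0.1)[0])
```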

  19. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  20. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.

  1. Algebraic geometric codes

    Science.gov (United States)

    Shahshahani, M.

    1991-01-01

    The performance characteristics are discussed of certain algebraic geometric codes. Algebraic geometric codes have good minimum distance properties. On many channels they outperform other comparable block codes; therefore, one would expect them eventually to replace some of the block codes used in communications systems. It is suggested that it is unlikely that they will become useful substitutes for the Reed-Solomon codes used by the Deep Space Network in the near future. However, they may be applicable to systems where the signal to noise ratio is sufficiently high so that block codes would be more suitable than convolutional or concatenated codes.

  2. Monomial-like codes

    CERN Document Server

    Martinez-Moro, Edgar; Ozbudak, Ferruh; Szabo, Steve

    2010-01-01

    As a generalization of cyclic codes of length p^s over F_{p^a}, we study n-dimensional cyclic codes of length p^{s_1} × ... × p^{s_n} over F_{p^a} generated by a single "monomial". Namely, we study multi-variable cyclic codes of the form ⟨…⟩ in F_{p^a}[x_1, ..., x_n] / ⟨…⟩. We call such codes monomial-like codes. We show that these codes arise from the product of certain single variable codes and we determine their minimum Hamming distance. We determine the dual of monomial-like codes yielding a parity check matrix. We also present an alternative way of constructing a parity check matrix using the Hasse derivative. We study the weight hierarchy of certain monomial-like codes. We simplify an expression that gives us the weight hierarchy of these codes.

  3. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  4. Dress Codes in the Public Schools: Principals, Policies, and Precepts.

    Science.gov (United States)

    DeMitchell, Todd A.; Fossey, Richard; Cobb, Casey

    2000-01-01

    Responses from 157 principals (65 percent of a national sample) showed strong support for dress codes. Research focuses on the perception of school principals regarding dress codes, analyzes dress codes for common features, and proposes a constitutional standard of review for contested dress codes. (58 footnotes) (MLF)

  5. DNA Barcoding through Quaternary LDPC Codes.

    Science.gov (United States)

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6.
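
    The decoding step behind error-correcting barcodes can be illustrated with a minimum-Hamming-distance assignment of a sequencing read to a barcode set, rejecting ambiguous or too-distant reads; the toy 8-nt barcodes below are assumptions and do not reproduce the BCH or LDPC constructions studied in the paper.

```python
def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def assign_read(read, barcodes, max_mismatches=1):
    """Return the unique barcode within max_mismatches of the read, else None (read loss)."""
    hits = sorted((hamming(read, bc), bc) for bc in barcodes)
    best_d, best_bc = hits[0]
    if best_d > max_mismatches:
        return None                      # too many errors: discard the read
    if len(hits) > 1 and hits[1][0] == best_d:
        return None                      # ambiguous: avoid sample misidentification
    return best_bc

# Hypothetical 8-nt barcodes with pairwise Hamming distance >= 3.
barcodes = ["ACGTACGT", "TTGGCCAA", "GATCGATC"]
print(assign_read("ACGTACGA", barcodes))   # one mismatch -> "ACGTACGT"
print(assign_read("ACGGCCAA", barcodes))   # >1 mismatch from every barcode -> None
```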

  6. DNA Barcoding through Quaternary LDPC Codes.

    Directory of Open Access Journals (Sweden)

    Elizabeth Tapia

    Full Text Available For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6.

  7. TIPONLINE Code Table

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Coded items are entered in the tiponline data entry program. The codes and their explanations are necessary in order to use the data

  8. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  9. ARC Code TI: ROC Curve Code Augmentation

    Data.gov (United States)

    National Aeronautics and Space Administration — ROC (Receiver Operating Characteristic) curve Code Augmentation was written by Rodney Martin and John Stutz at NASA Ames Research Center and is a modification of ROC...

  10. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  11. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  12. The Procions` code; Le code Procions

    Energy Technology Data Exchange (ETDEWEB)

    Deck, D.; Samba, G.

    1994-12-19

    This paper presents a new code to simulate plasmas generated by inertial confinement. This multi-species kinetic code makes no angular approximation for the ions and works in planar and spherical geometry. First, the physical model, based on the Fokker-Planck equation, is presented. Then, the numerical model used to solve the Fokker-Planck operator in the Rosenbluth form is introduced. At the end, several numerical tests are proposed. (TEC). 17 refs., 27 figs.

  13. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    …, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  14. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  15. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
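
    The wrapper pattern described above (serialize inputs to a file, launch the external application, read its outputs back) can be sketched in a few lines; the file names, executable, and output format below are hypothetical placeholders, not the actual GoldSim DLL interface.

```python
import subprocess
from pathlib import Path

def run_external_code(inputs, workdir="run", exe="external_solver"):
    """Write inputs to a file, run a (hypothetical) external code, and read its outputs back."""
    work = Path(workdir)
    work.mkdir(exist_ok=True)
    infile = work / "input.txt"
    infile.write_text("\n".join(f"{k} = {v}" for k, v in inputs.items()))
    # The external application is assumed to read input.txt and write output.txt in its cwd.
    subprocess.run([exe, infile.name], check=True, cwd=work)
    outfile = work / "output.txt"
    return {line.split("=")[0].strip(): float(line.split("=")[1])
            for line in outfile.read_text().splitlines() if "=" in line}

# Example call (the executable name above is a placeholder):
# results = run_external_code({"flow_rate": 1.2, "temperature": 300.0})
```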

  16. Noisy Network Coding

    CERN Document Server

    Lim, Sung Hoon; Gamal, Abbas El; Chung, Sae-Young

    2010-01-01

    A noisy network coding scheme for sending multiple sources over a general noisy network is presented. For multi-source multicast networks, the scheme naturally extends both network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to general discrete memoryless and Gaussian networks. The scheme also recovers as special cases the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves message repetition coding, relay signal compression, and simultaneous decoding. Unlike previous compress-forward schemes, where independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner-Ziv binning as in previous compress-forward sch...

  17. Schroedinger’s code: Source code availability and transparency in astrophysics

    Science.gov (United States)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal's 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.
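
    The link-rot check described in the abstract can be approximated with a short standard-library script that tests whether each URL still answers an HTTP request; the URL list, timeout, and status handling below are assumptions for illustration.

```python
import urllib.request
import urllib.error

def is_reachable(url, timeout=10):
    """Return True if the URL answers an HTTP request without error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

urls = ["https://example.com", "https://example.com/definitely-missing"]  # placeholders
for u in urls:
    print(u, "ok" if is_reachable(u) else "unreachable")
```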

  18. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer's materiality. Cramer is thus the voice of a new 'code avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: "art-oriented programming needs to acknowledge the conditions of its own making – its poesis." By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  19. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    We welcome Tanya Stivers's discussion (Stivers, 2015/this issue) of coding social interaction and find that her descriptions of the processes of coding open up important avenues for discussion, among other things of the precise ad hoc considerations that researchers need to bear in mind, both when doing formal coding and when doing more "traditional" conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded. Instead we propose that the promise of coding-based research lies in its ability to open up new qualitative questions.

  20. Overview of Code Verification

    Science.gov (United States)

    1983-01-01

    The verified code for the SIFT Executive is not the code that executes on the SIFT system as delivered. The running versions of the SIFT Executive contain optimizations and special code relating to the messy interface to the hardware broadcast interface and to packing of data to conserve space in the store of the BDX930 processors. The running code was in fact developed prior to and without consideration of any mechanical verification. This was regarded as necessary experimentation with the SIFT hardware and special purpose Pascal compiler. The Pascal code sections cover: the selection of a schedule from the global executive broadcast, scheduling, dispatching, three way voting, and error reporting actions of the SIFT Executive. Not included in these sections of Pascal code are: the global executive, five way voting, clock synchronization, interactive consistency, low level broadcasting, and program loading, initialization, and schedule construction.

  1. Phonological coding during reading

    Science.gov (United States)

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Order, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  2. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  3. Generating code adapted for interlinking legacy scalar code and extended vector code

    Science.gov (United States)

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  4. Decoding of Cyclic Codes,

    Science.gov (United States)

    Keywords: information theory (decoding), data transmission systems (decoding), statistical analysis, stochastic processes, coding, white noise, number theory, corrections, binary arithmetic, shift registers, control systems, USSR.

  5. ARC Code TI: ACCEPT

    Data.gov (United States)

    National Aeronautics and Space Administration — ACCEPT consists of an overall software infrastructure framework and two main software components. The software infrastructure framework consists of code written to...

  6. Diameter Perfect Lee Codes

    CERN Document Server

    Horak, Peter

    2011-01-01

    Lee codes have been intensively studied for more than 40 years. Interest in these codes has been triggered by the Golomb-Welch conjecture on the existence of perfect error-correcting Lee codes. In this paper we deal with the existence and enumeration of diameter perfect Lee codes. As main results we determine all q for which there exists a linear diameter-4 perfect Lee code of word length n over Z_{q}, and prove that for each n ≥ 3 there are uncountably many diameter-4 perfect Lee codes of word length n over Z. This is in strict contrast with perfect error-correcting Lee codes of word length n over Z, as there is a unique such code for n=3, and it is conjectured that this is always the case when 2n+1 is a prime. Diameter perfect Lee codes will be constructed by an algebraic construction that is based on a group homomorphism. This will allow us to design an efficient algorithm for their decoding.

  7. Expander chunked codes

    Science.gov (United States)

    Tang, Bin; Yang, Shenghao; Ye, Baoliu; Yin, Yitong; Lu, Sanglu

    2015-12-01

    Chunked codes are efficient random linear network coding (RLNC) schemes with low computational cost, where the input packets are encoded into small chunks (i.e., subsets of the coded packets). During the network transmission, RLNC is performed within each chunk. In this paper, we first introduce a simple transfer matrix model to characterize the transmission of chunks and derive some basic properties of the model to facilitate the performance analysis. We then focus on the design of overlapped chunked codes, a class of chunked codes whose chunks are non-disjoint subsets of input packets, which are of special interest since they can be encoded with negligible computational cost and in a causal fashion. We propose expander chunked (EC) codes, the first class of overlapped chunked codes that have an analyzable performance, where the construction of the chunks makes use of regular graphs. Numerical and simulation results show that in some practical settings, EC codes can achieve rates within 91 to 97 % of the optimum and outperform the state-of-the-art overlapped chunked codes significantly.
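
    Random linear network coding within a single chunk, as described above, can be sketched over GF(2): each coded packet carries random binary coefficients and the corresponding combination of the chunk's input packets, and the chunk is decodable once the received coefficient vectors reach full rank. The chunk size and packet length below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
chunk = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)   # 4 input packets, 16 bits each

def encode_packet(chunk, rng):
    """Emit one RLNC packet: random GF(2) coefficients plus the coded payload."""
    coeffs = rng.integers(0, 2, size=chunk.shape[0], dtype=np.uint8)
    payload = coeffs @ chunk % 2
    return coeffs, payload

def gf2_rank(rows):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

received = []
while True:
    received.append(encode_packet(chunk, rng))
    if gf2_rank([c for c, _ in received]) == chunk.shape[0]:
        break
print(f"chunk decodable after {len(received)} coded packets")
```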

  8. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown

  9. On σ-LCD codes

    OpenAIRE

    Carlet, Claude; Mesnager, Sihem; Tang, Chunming; Qi, Yanfeng

    2017-01-01

    Linear complementary pairs (LCP) of codes play an important role in armoring implementations against side-channel attacks and fault injection attacks. One of the most common ways to construct LCP of codes is to use Euclidean linear complementary dual (LCD) codes. In this paper, we first introduce the concept of linear codes with σ complementary dual (σ-LCD), which includes known Euclidean LCD codes, Hermitian LCD codes, and Galois LCD codes. As Euclidean LCD codes, σ-LCD ...

  10. Identifying personal microbiomes using metagenomic codes.

    Science.gov (United States)

    Franzosa, Eric A; Huang, Katherine; Meadow, James F; Gevers, Dirk; Lemon, Katherine P; Bohannan, Brendan J M; Huttenhower, Curtis

    2015-06-02

    Community composition within the human microbiome varies across individuals, but it remains unknown if this variation is sufficient to uniquely identify individuals within large populations or stable enough to identify them over time. We investigated this by developing a hitting set-based coding algorithm and applying it to the Human Microbiome Project population. Our approach defined body site-specific metagenomic codes: sets of microbial taxa or genes prioritized to uniquely and stably identify individuals. Codes capturing strain variation in clade-specific marker genes were able to distinguish among 100s of individuals at an initial sampling time point. In comparisons with follow-up samples collected 30-300 d later, ∼30% of individuals could still be uniquely pinpointed using metagenomic codes from a typical body site; coincidental (false positive) matches were rare. Codes based on the gut microbiome were exceptionally stable and pinpointed >80% of individuals. The failure of a code to match its owner at a later time point was largely explained by the loss of specific microbial strains (at current limits of detection) and was only weakly associated with the length of the sampling interval. In addition to highlighting patterns of temporal variation in the ecology of the human microbiome, this work demonstrates the feasibility of microbiome-based identifiability-a result with important ethical implications for microbiome study design. The datasets and code used in this work are available for download from huttenhower.sph.harvard.edu/idability.
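
    The hitting-set idea can be illustrated with a greedy routine that, for one target profile, keeps adding features carried by the target until no other profile carries all of them; the presence/absence profiles below are toy assumptions, not Human Microbiome Project data.

```python
def greedy_metagenomic_code(target, others):
    """Greedily pick features of `target` until no other profile carries all of them."""
    code, remaining = [], list(others)
    candidates = {f for f, present in target.items() if present}
    while remaining:
        # Pick the target feature that is absent from the most remaining confounders.
        best = max(candidates, key=lambda f: sum(not o.get(f, False) for o in remaining))
        if all(o.get(best, False) for o in remaining):
            raise ValueError("target profile cannot be distinguished from every other profile")
        code.append(best)
        candidates.remove(best)
        remaining = [o for o in remaining if o.get(best, False)]
    return code

# Toy presence/absence profiles over hypothetical marker genes.
target = {"m1": True, "m2": True, "m3": True, "m4": False}
others = [
    {"m1": True, "m2": False, "m3": True, "m4": True},
    {"m1": True, "m2": True, "m3": False, "m4": False},
]
print(sorted(greedy_metagenomic_code(target, others)))   # ['m2', 'm3']
```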

  11. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Pries-Heje, Lene; Dahlgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  12. Error Correcting Codes

    Indian Academy of Sciences (India)

    be fixed to define codes over such domains). New decoding schemes that take advantage of such connections can be devised. These may soon show up in a technique called code division multiple access (CDMA) which is proposed as a basis for digital cellular communication. CDMA provides a facility for many users to ...

  13. Codes of Conduct

    Science.gov (United States)

    Million, June

    2004-01-01

    Most schools have a code of conduct, pledge, or behavioral standards, set by the district or school board with the school community. In this article, the author features some schools that created a new vision of instilling code of conducts to students based on work quality, respect, safety and courtesy. She suggests that communicating the code…

  14. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March 1997 pp 33-47. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/03/0033-0047 ...

  15. Code Generation = A* + BURS

    NARCIS (Netherlands)

    Nymeyer, Albert; Katoen, Joost P.; Westra, Ymte; Alblas, H.; Gyimóthy, Tibor

    1996-01-01

    A system called BURS that is based on term rewrite systems and a search algorithm A* are combined to produce a code generator that generates optimal code. The theory underlying BURS is re-developed, formalised and explained in this work. The search algorithm uses a cost heuristic that is derived

  16. Dress Codes for Teachers?

    Science.gov (United States)

    Million, June

    2004-01-01

    In this article, the author discusses an e-mail survey of principals from across the country regarding whether or not their school had a formal staff dress code. The results indicate that most did not have a formal dress code, but agreed that professional dress for teachers was not only necessary, but showed respect for the school and had a…

  17. Informal control code logic

    NARCIS (Netherlands)

    Bergstra, J.A.

    2010-01-01

    General definitions as well as rules of reasoning regarding control code production, distribution, deployment, and usage are described. The role of testing, trust, confidence and risk analysis is considered. A rationale for control code testing is sought and found for the case of safety critical

  18. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

    Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low weight codewords, that gives a further advantage in the error floor region.

  19. Nuremberg code turns 60

    OpenAIRE

    Thieren, Michel; Mauron, Alexandre

    2007-01-01

    This month marks sixty years since the Nuremberg code – the basic text of modern medical ethics – was issued. The principles in this code were articulated in the context of the Nuremberg trials in 1947. We would like to use this anniversary to examine its ability to address the ethical challenges of our time.

  20. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 1. Error Correcting Codes The Hamming Codes. Priti Shankar. Series Article Volume 2 Issue 1 January ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  1. "Hour of Code": Can It Change Students' Attitudes toward Programming?

    Science.gov (United States)

    Du, Jie; Wimmer, Hayden; Rada, Roy

    2016-01-01

    The Hour of Code is a one-hour introduction to computer science organized by Code.org, a non-profit dedicated to expanding participation in computer science. This study investigated the impact of the Hour of Code on students' attitudes towards computer programming and their knowledge of programming. A sample of undergraduate students from two…

  2. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particular nice examples of affine variety codes. Our study...

  3. Quantum Synchronizable Codes From Quadratic Residue Codes and Their Supercodes

    OpenAIRE

    Xie, Yixuan; Yuan, Jinhong; Fujiwara, Yuichiro

    2014-01-01

    Quantum synchronizable codes are quantum error-correcting codes designed to correct the effects of both quantum noise and block synchronization errors. While it is known that quantum synchronizable codes can be constructed from cyclic codes that satisfy special properties, only a few classes of cyclic codes have been proved to give promising quantum synchronizable codes. In this paper, using quadratic residue codes and their supercodes, we give a simple construction for quantum synchronizable...

  4. Research on Primary Shielding Calculation Source Generation Codes

    Directory of Open Access Journals (Sweden)

    Zheng Zheng

    2017-01-01

    Full Text Available Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDF) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energy and weights from the CDFs. A source generation code is developed to transform three dimensional (3D) power distributions in xyz geometry to source distributions in r θ z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on PSC models of Qinshan No.1 nuclear power plant (NPP), CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.

  5. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDF) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energy and weights from the CDFs. A source generation code is developed to transform three dimensional (3D) power distributions in xyz geometry to source distributions in r θ z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on PSC models of Qinshan No.1 nuclear power plant (NPP), CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
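
    The source-particle sampling step described above can be illustrated with a generic inverse-transform draw from a tabulated CDF; the energy bins and cumulative probabilities below are invented for illustration and are not taken from the JMCT or JSNT codes.

```python
import numpy as np

def sample_from_cdf(bin_edges, cdf, n, rng):
    """Draw n samples from a piecewise-uniform distribution given its tabulated CDF."""
    u = rng.random(n)                                 # uniform random numbers in [0, 1)
    idx = np.searchsorted(cdf, u, side="right")       # locate the CDF bin of each draw
    lo = np.concatenate(([0.0], cdf))[idx]            # cumulative probability at the bin's left edge
    width = cdf[idx] - lo                             # probability mass of the bin
    frac = np.where(width > 0, (u - lo) / width, 0.0)
    return bin_edges[idx] + frac * (bin_edges[idx + 1] - bin_edges[idx])

# Hypothetical energy bins (MeV) and their cumulative probabilities.
edges = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
cdf = np.array([0.4, 0.7, 0.9, 1.0])
rng = np.random.default_rng(1)
print(sample_from_cdf(edges, cdf, 5, rng))
```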

  6. Pyramid image codes

    Science.gov (United States)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.

  7. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  8. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  9. Quantum coding theorems

    Science.gov (United States)

    Holevo, A. S.

    1998-12-01

    Contents: I. Introduction. II. General considerations: § 1. Quantum communication channel; § 2. Entropy bound and channel capacity; § 3. Formulation of the quantum coding theorem. Weak conversion. III. Proof of the direct statement of the coding theorem: § 1. Channels with pure signal states; § 2. Reliability function; § 3. Quantum binary channel; § 4. Case of arbitrary states with bounded entropy. IV. c-q channels with input constraints: § 1. Coding theorem; § 2. Gauss channel with one degree of freedom; § 3. Classical signal on quantum background noise. Bibliography.

  10. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center. [Radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term "code package" is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a "code package" consists of written material (reports, instructions, flow charts, listings of data, and other useful material) and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give to a potential code user several criteria for deciding whether or not he wishes to request the code package. (RWR)

  11. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
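
    The bootstrap step can be sketched generically: resample the per-history scores of the correlated and conventional runs, recompute the efficiency gain for each resample, and report a percentile interval. The synthetic scores and the simplified gain definition (variance ratio weighted by computing time) below are assumptions, and a percentile interval is used instead of the paper's shortest-interval criterion.

```python
import numpy as np

def efficiency_gain(conv, corr, t_conv=1.0, t_corr=1.0):
    """Gain = (variance x time) of the conventional run over the correlated run."""
    return (np.var(conv, ddof=1) * t_conv) / (np.var(corr, ddof=1) * t_corr)

def bootstrap_ci(conv, corr, n_boot=5000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for the efficiency gain."""
    rng = rng or np.random.default_rng(0)
    gains = []
    for _ in range(n_boot):
        c = rng.choice(conv, size=conv.size, replace=True)   # resample per-history scores
        r = rng.choice(corr, size=corr.size, replace=True)
        gains.append(efficiency_gain(c, r))
    lo, hi = np.percentile(gains, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic per-history scores: a heavy-tailed conventional estimator vs. a correlated one.
rng = np.random.default_rng(42)
conv = rng.lognormal(mean=0.0, sigma=1.0, size=2000)
corr = rng.normal(loc=1.6, scale=0.4, size=2000)
print("gain =", round(efficiency_gain(conv, corr), 2), "95% CI:", bootstrap_ci(conv, corr))
```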

  12. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.

  13. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  14. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  15. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  16. Coded Random Access

    DEFF Research Database (Denmark)

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi

    2015-01-01

    The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as waste. However, if the common receiver (e.g., base station) is capable to store the collision slots and use them in a transmission recovery process based on successive interference cancellation, the design space for access protocols is radically expanded. We present the paradigm of coded random access, in which the structure of the access protocol can be mapped to a structure of an erasure-correcting code defined on a graph. This opens the possibility to use coding theory and tools for designing efficient random access protocols, offering markedly better performance than ALOHA. Several instances of coded...

  17. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
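
    Whether a package set is levelized can be checked mechanically: build the uses-graph and attempt a topological sort, since a cycle means no levelization exists. The package names and dependencies below are invented for illustration.

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical package -> packages-it-uses map (dependencies must be built first).
uses = {
    "driver": {"hydro", "eos", "utils"},
    "hydro": {"eos", "utils"},
    "eos": {"utils"},
    "utils": set(),
}

try:
    order = list(TopologicalSorter(uses).static_order())
    print("levelized build order:", order)      # lowest-level packages come first
except CycleError as err:
    print("not levelized, cycle found:", err.args[1])
```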

  18. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  19. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each ... as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of an increased feedback and receiver complexity.

  20. Code de conduite

    International Development Research Centre (IDRC) Digital Library (Canada)

    irocca

    his or her point of view, in a spirit of openness and respect. OUR CODE OF CONDUCT. The CRDI is committed to conducting itself according to the strictest ethical standards in all of its activities. The Code of Conduct reflects our mission, our employment philosophy and the outcomes of discussions ...

  1. Open Coding Descriptions

    Directory of Open Access Journals (Sweden)

    Barney G. Glaser, PhD, Hon PhD

    2016-12-01

    Full Text Available Open coding is a big source of descriptions that must be managed and controlled when doing GT research. The goal of generating a GT is to generate an emergent set of concepts and their properties that fit and work with relevancy to be integrated into a theory. To achieve this goal, the researcher begins his research with open coding, that is coding all his data in every possible way. The consequence of this open coding is a multitude of descriptions for possible concepts that often do not fit in the emerging theory. Thus in this case the researcher ends up with many irrelevant descriptions for concepts that do not apply. To dwell on descriptions for inapplicable concepts ruins the GT theory as it starts. It is hard to stop. Confusion easily sets in. Switching the study to a QDA is a simple rescue. Rigorous focusing on emerging concepts is vital before being lost in open coding descriptions. It is important, no matter how interesting the description may become. Once a core is possible, selective coding can start which will help control against being lost in multiple descriptions.

  2. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
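
    The minimum-distance notion used above can be made concrete with a brute-force check over all nonzero codewords of a small ternary code; the [4, 2] generator matrix below is a toy example, not one of the 22 new codes from the paper.

```python
import itertools
import numpy as np

def min_distance(G, q=3):
    """Minimum Hamming weight over all nonzero codewords of the linear code generated by G."""
    k, n = G.shape
    best = n
    for msg in itertools.product(range(q), repeat=k):
        if not any(msg):
            continue                                  # skip the all-zero message
        codeword = np.mod(np.array(msg) @ G, q)
        best = min(best, int(np.count_nonzero(codeword)))
    return best

# Toy [4, 2] ternary code with minimum distance 3.
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 2]])
print(min_distance(G, q=3))   # 3
```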

  3. Temporal Coding of Volumetric Imagery

    Science.gov (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration
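
    The temporal-coding idea can be written as a forward model in a few lines: each sub-frame of the video volume is multiplied by a translated binary mask and the products are summed into a single coded snapshot, y = sum_t C_t * x_t. The mask and video below are random placeholders, and the (much harder) reconstruction step is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64
video = rng.random((T, H, W))                               # x_t: sub-frames within one exposure
mask = rng.integers(0, 2, size=(H + T, W)).astype(float)    # transmissive coded aperture

# C_t: the same mask translated by one row per sub-frame (mechanical translation).
codes = np.stack([mask[t:t + H, :] for t in range(T)])
snapshot = np.sum(codes * video, axis=0)                    # y = sum_t C_t * x_t (single exposure)
print(snapshot.shape)                                       # (64, 64): one coded image encodes 8 frames
```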

  4. Code blue: seizures.

    Science.gov (United States)

    Hoerth, Matthew T; Drazkowski, Joseph F; Noe, Katherine H; Sirven, Joseph I

    2011-06-01

    Eyewitnesses frequently perceive seizures as life threatening. If an event occurs on the hospital premises, a "code blue" can be called, which consumes considerable resources. The purpose of this study was to determine the frequency and characteristics of code blue calls for seizures and seizure mimickers. A retrospective review of a code blue log from 2001 through 2008 identified 50 seizure-like events, representing 5.3% of all codes. Twenty-eight (56%) occurred in inpatients; the other 22 (44%) events involved visitors or employees on the hospital premises. Eighty-six percent of the events were epileptic seizures. Seizure mimickers, particularly psychogenic nonepileptic seizures, were more common in the nonhospitalized group. Only five (17.9%) inpatients had a known diagnosis of epilepsy, compared with 17 (77.3%) of the nonhospitalized patients. This retrospective survey provides insights into how code blues are called on hospitalized versus nonhospitalized patients for seizure-like events. Copyright © 2011. Published by Elsevier Inc.

  5. Error coding simulations

    Science.gov (United States)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
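
    As a small illustration of the error-detection part mentioned above, here is a bitwise CRC-16 sketch using the CCITT polynomial commonly associated with the CCSDS recommendation; the parameters are stated as assumptions for illustration, not a certified implementation.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over the given data; the checksum is appended by the transmitter."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
corrupted = b"telemetry frame pay1oad"
print(hex(crc16_ccitt(frame)))                        # checksum sent with the frame
print(crc16_ccitt(frame) == crc16_ccitt(corrupted))   # False: the corruption is detected
```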

  6. Twisted Reed-Solomon Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Puchinger, Sven; Rosenkilde ne Nielsen, Johan

    2017-01-01

    We present a new general construction of MDS codes over a finite field Fq. We describe two explicit subclasses which contain new MDS codes of length at least q/2 for all values of q ≥ 11. Moreover, we show that most of the new codes are not equivalent to a Reed-Solomon code.
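
    For context on the MDS property referenced above, the sketch below encodes a classical (untwisted) Reed-Solomon code over a small prime field and verifies by brute force that its minimum distance meets the Singleton bound d = n - k + 1. It illustrates what "MDS" means; it is not the twisted construction from the paper.

```python
import itertools

p, n, k = 11, 10, 4                       # prime field GF(11), length 10, dimension 4 (toy sizes)
alphas = list(range(1, n + 1))            # n distinct evaluation points in GF(p)

def encode(msg):
    """Evaluate the message polynomial at the points alphas (classical Reed-Solomon encoding)."""
    return [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p for a in alphas]

d = min(sum(s != 0 for s in encode(m))
        for m in itertools.product(range(p), repeat=k) if any(m))
print(d, n - k + 1)                       # both 7: the code is MDS
```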

  7. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  8. Manufacturer Identification Code (MID) - ACE

    Data.gov (United States)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  9. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.

  10. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides scientific understanding of the most central techniques used in speech coding, both for advanced students as well as professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource (Information): What information is available and how can it be useful? Resource (Platform): What kind of platforms are we working with and what are their capabilities/restrictions? This includes computational, memory and acoustic properties and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed and how can they be used to reach the stated goals and ...

  11. Code query by example

    Science.gov (United States)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  12. Graph Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2006-01-01

    We treat a specific case of codes based on bipartite expander graphs coming from finite geometries. The code symbols are associated with the branches and the symbols connected to a given node are restricted to be codewords in a Reed-Solomon code. We give results on the parameters of the codes...

  13. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    , a deliberate attempt was made to present example problems in order to help the user ''get started'', and we follow that spirit in this report. A series of elementary tutorial user codes are presented in Chapter 3, with more sophisticated sample user codes described in Chapter 4. Novice EGS users will find it helpful to read through the initial sections of the EGS5 User Manual (provided in Appendix B of this report), proceeding then to work through the tutorials in Chapter 3. The User Manuals and other materials found in the appendices contain detailed flow charts, variable lists, and subprogram descriptions of EGS5 and PEGS. Included are step-by-step instructions for developing basic EGS5 user codes and for accessing all of the physics options available in EGS5 and PEGS. Once acquainted with the basic structure of EGS5, users should find the appendices the most frequently consulted sections of this report.

  14. Code of Medical Ethics

    Directory of Open Access Journals (Sweden)

    . SZD-SZZ

    2017-03-01

    Full Text Available Te Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.

  15. Physical Layer Network Coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Yomo, Hironori; Popovski, Petar

    2013-01-01

    Physical layer network coding (PLNC) has the potential to improve throughput of multi-hop networks. However, most of the works are focused on the simple, three-node model with two-way relaying, not taking into account the fact that there can be other neighboring nodes that can cause/receive inter...
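
    The core relaying idea behind PLNC can be illustrated at the bit level. This is a simplified sketch: in actual PLNC the relay maps the superimposed analog signals directly to this XOR rather than decoding each message separately.

```python
# Two-way relay: nodes A and B each hold a message, the relay broadcasts their XOR,
# and each node recovers the other's message using its own message as side information.
a, b = 0b1011, 0b0110        # hypothetical 4-bit payloads at nodes A and B
relayed = a ^ b              # network-coded message broadcast by the relay

print(bin(relayed ^ a))      # node A recovers b (0b110)
print(bin(relayed ^ b))      # node B recovers a (0b1011)
```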

  16. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networksOffering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  17. Securing mobile code.

    Energy Technology Data Exchange (ETDEWEB)

    Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik

    2004-10-01

    If software is designed so that the software can issue functions that will move that software from one computing platform to another, then the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called ...

  18. Mal-Xtract: Hidden Code Extraction using Memory Analysis

    Science.gov (United States)

    Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah

    2017-01-01

    Software packers have been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and to extract the original code from a packed binary executable using memory analysis running in a software-emulated environment. Our experimental results show that at least 97% of the original code from various binary executables packed with different software packers could be extracted. The proposed method also successfully extracted hidden code from recent malware family samples.

  19. Ptolemy Coding Style

    Science.gov (United States)

    2014-09-05

    because this would combine Ptolemy II with the GPL'd code and thus encumber Ptolemy II with the GPL. Another GNU license is the GNU Library General...permission on the source.eecs.berkeley.edu repositories, then use your local repository. bash-3.2$ svn co svn+ssh://source.eecs.berkeley.edu/chess

  20. Error Correcting Codes

    Indian Academy of Sciences (India)

    The images, which came from Galileo's flyby of the moon on June 26-27, 1996, are reported to be 20 times better than those obtained from the Voyager. Priti Shankar .... a systematic way. Thus was born a brand new field, which has since been ..... mathematically oriented, compact book on coding, containing a few topics not ...

  1. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer of Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  2. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  3. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  4. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 10. Error Correcting Codes How Numbers Protect Themselves. Priti Shankar. Series Article Volume 1 ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  5. Video Coding for ESL.

    Science.gov (United States)

    King, Kevin

    1992-01-01

    Coding tasks, a valuable technique for teaching English as a Second Language, are presented that enable students to look at patterns and structures of marital communication as well as objectively evaluate the degree of happiness or distress in the marriage. (seven references) (JL)

  6. Physical layer network coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki

    2014-01-01

    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...

  7. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Computer Science and Automation, IISc. Their research addresses various aspects of algebraic and combinatorial coding theory. 1 Low Density Parity Check ..... illustrating how the variable Xd is decoded. As mentioned earlier, this algorithm runs iteratively. To start with, in the first iteration, only bits in the first level of the ...

  8. Broadcast Coded Slotted ALOHA

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Brännström, Frederik; Graell i Amat, Alexandre

    2016-01-01

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives...

  9. Student Dress Codes.

    Science.gov (United States)

    Uerling, Donald F.

    School officials see a need for regulations that prohibit disruptive and inappropriate forms of expression and attire; students see these regulations as unwanted restrictions on their freedom. This paper reviews court litigation involving constitutional limitations on school authority, dress and hair codes, state law constraints, and school…

  10. Dress Codes and Uniforms.

    Science.gov (United States)

    Lumsden, Linda; Miller, Gabriel

    2002-01-01

    Students do not always make choices that adults agree with in their choice of school dress. Dress-code issues are explored in this Research Roundup, and guidance is offered to principals seeking to maintain a positive school climate. In "Do School Uniforms Fit?" Kerry White discusses arguments for and against school uniforms and summarizes the…

  11. Dress Codes. Legal Brief.

    Science.gov (United States)

    Zirkel, Perry A.

    2000-01-01

    As illustrated by two recent decisions, the courts in the past decade have demarcated wide boundaries for school officials considering dress codes, whether in the form of selective prohibitions or required uniforms. Administrators must warn the community, provide legitimate justification and reasonable clarity, and comply with state law. (MLH)

  12. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    titled 'A Mathematical Theory of Communication' in the Bell System Technical Journal in 1948. The paper set up a ... 'existential' result but not a 'constructive' one. The construction of such a code evolved from the work ... several papers on hyperbolic geometry. He shifted to the Department of Pure Mathematics at Calcutta.

  13. Cracking the Codes

    Science.gov (United States)

    Heathcote, Dorothy

    1978-01-01

    Prescribes an attitude that teachers can take to help students "crack the code" of a dramatic work, combining a flexible teaching strategy, the suspension of beliefs or preconceived notions about the work, focusing on the drama's text, and choosing a reading strategy appropriate to the dramatic work. (RL)

  14. Corporate governance through codes

    NARCIS (Netherlands)

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.

    2014-01-01

    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and

  15. Coded SQUID arrays

    NARCIS (Netherlands)

    Podt, M.; Weenink, J.; Weenink, J.; Flokstra, Jakob; Rogalla, Horst

    2001-01-01

    We report on a superconducting quantum interference device (SQUID) system to read out large arrays of cryogenic detectors. In order to reduce the number of SQUIDs required for an array of these detectors, we used code-division multiplexing. This simplifies the electronics because of a significantly
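
    The code-division multiplexing idea mentioned above can be sketched with ±1 Walsh-Hadamard codes: each detector is modulated by one code row, the coded sums share a single readout channel, and orthogonality of the codes separates the detectors again. This is a schematic model with made-up numbers, not the cryogenic electronics of the paper.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n a power of two); rows are the ±1 codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n_det = 4
H = hadamard(n_det)
signals = np.array([0.5, -1.2, 3.0, 0.7])   # hypothetical detector outputs during one frame
coded_samples = H.T @ signals               # n_det coded sums measured on one shared channel
recovered = (H @ coded_samples) / n_det     # demodulation: orthogonal codes separate the detectors
print(np.allclose(recovered, signals))      # True
```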

  16. Reed-Solomon convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Schmale, W

    2005-01-01

    In this paper we will introduce a specific class of cyclic convolutional codes. The construction is based on Reed-Solomon block codes. The algebraic parameters as well as the distance of these codes are determined. This shows that some of these codes are optimal or near optimal.

  17. Causation, constructors and codes.

    Science.gov (United States)

    Hofmeyr, Jan-Hendrik S

    2017-09-13

    Relational biology relies heavily on the enriched understanding of causal entailment that Robert Rosen's formalisation of Aristotle's four causes has made possible, although to date efficient causes and the rehabilitation of final cause have been its main focus. Formal cause has been paid rather scant attention, but, as this paper demonstrates, is crucial to our understanding of many types of processes, not necessarily biological. The graph-theoretic relational diagram of a mapping has played a key role in relational biology, and the first part of the paper is devoted to developing an explicit representation of formal cause in the diagram and how it acts in combination with efficient cause to form a mapping. I then use these representations to show how Von Neumann's universal constructor can be cast into a relational diagram in a way that avoids the logical paradox that Rosen detected in his own representation of the constructor in terms of sets and mappings. One aspect that was absent from both Von Neumann's and Rosen's treatments was the necessity of a code to translate the description (the formal cause) of the automaton to be constructed into the construction process itself. A formal definition of codes in general, and organic codes in particular, allows the relational diagram to be extended so as to capture this translation of formal cause into process. The extended relational diagram is used to exemplify causal entailment in a diverse range of processes, such as enzyme action, construction of automata, communication through the Morse code, and ribosomal polypeptide synthesis through the genetic code. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture, just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows to deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded apertures approximation decreases at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is up to 2.5 dB of PSNR less with respect to the phase coded aperture reconstructions.
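
    The reconstruction quality in the abstract is reported in PSNR; for reference, a minimal implementation of that metric (with an assumed peak value of 1.0 for normalized images) looks as follows.

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and its reconstruction."""
    mse = np.mean((reference - reconstruction) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
reference = rng.random((64, 64))                       # hypothetical ground-truth image
reconstruction = reference + 0.01 * rng.standard_normal((64, 64))
print(round(psnr(reference, reconstruction), 1))       # higher dB means a closer reconstruction
```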

  19. Universal Features for the Classification of Coding and Non-Coding DNA Sequences

    OpenAIRE

    Nicolas Carels; Ramon Vidal; Diego Frías

    2009-01-01

    In this report, we revisited simple features that allow the classification of coding sequences (CDS) from non-coding DNA. The spectrum of codon usage of our sequence sample is large and suggests that these features are universal. The features that we investigated combine (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine, Guanine, Adenine probabilities in 1st, 2nd, 3rd position of triplets, res...
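
    A toy sketch of the kind of codon-level statistics mentioned above, namely stop-codon frequency and per-position purine frequencies in a reading frame; the exact feature definitions of the paper may differ, so treat this purely as an illustration.

```python
STOPS = {"TAA", "TAG", "TGA"}
PURINES = {"A", "G"}

def frame_features(seq, frame=0):
    """Stop-codon frequency and purine frequency at each codon position for one reading frame."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    stop_freq = sum(c in STOPS for c in codons) / len(codons)
    purine_by_pos = [sum(c[p] in PURINES for c in codons) / len(codons) for p in range(3)]
    return stop_freq, purine_by_pos

seq = "ATGGCGAAGGTTTGACCGATGAAATAG"   # hypothetical toy sequence
print(frame_features(seq))
```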

  20. Essential idempotents and simplex codes

    Directory of Open Access Journals (Sweden)

    Gladys Chalom

    2017-01-01

    Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. Also we use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it is of length of the form $n=2^k-1$ and is generated by an essential idempotent.

  1. Coding Theory and Projective Spaces

    Science.gov (United States)

    Silberstein, Natalia

    2008-05-01

    The projective space of order n over a finite field F_q is the set of all subspaces of the vector space F_q^{n}. In this work, we consider error-correcting codes in the projective space, focusing mainly on constant dimension codes. We start with the different representations of subspaces in the projective space. These representations involve matrices in reduced row echelon form, associated binary vectors, and Ferrers diagrams. Based on these representations, we provide a new formula for the computation of the distance between any two subspaces in the projective space. We examine lifted maximum rank distance (MRD) codes, which are nearly optimal constant dimension codes. We prove that a lifted MRD code can be represented in such a way that it forms a block design known as a transversal design. The incidence matrix of the transversal design derived from a lifted MRD code can be viewed as a parity-check matrix of a linear code in the Hamming space. We find the properties of these codes, which can also be viewed as LDPC codes. We present new bounds and constructions for constant dimension codes. First, we present a multilevel construction for constant dimension codes, which can be viewed as a generalization of the lifted MRD code construction. This construction is based on a new type of rank-metric codes, called Ferrers diagram rank-metric codes. Then we derive upper bounds on the size of constant dimension codes which contain the lifted MRD code, and provide a construction for two families of codes that attain these upper bounds. We generalize the well-known concept of a punctured code for a code in the projective space to obtain large codes which are not constant dimension. We present efficient enumerative encoding and decoding techniques for the Grassmannian. Finally, we describe a search method for constant dimension lexicodes.
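
    For orientation, the standard subspace metric used in this area is d(U, V) = dim U + dim V - 2 dim(U ∩ V). The sketch below computes it via matrix ranks over the reals purely for illustration; the dissertation works over finite fields, and its new formula is based on echelon-form representations rather than this rank computation.

```python
import numpy as np

def subspace_distance(U, V):
    """d(U, V) = dim U + dim V - 2 dim(U ∩ V), with the rows of U and V spanning the subspaces."""
    rU, rV = np.linalg.matrix_rank(U), np.linalg.matrix_rank(V)
    r_sum = np.linalg.matrix_rank(np.vstack([U, V]))   # dim(U + V)
    r_int = rU + rV - r_sum                            # dimension formula for the intersection
    return rU + rV - 2 * r_int

U = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
V = np.array([[1, 0, 0, 0], [0, 0, 1, 0]])
print(subspace_distance(U, V))   # 2: the two planes share only a line
```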

  2. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  3. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, how to determine the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  4. Coding vs non-coding: Translatability of short ORFs found in putative non-coding transcripts.

    Science.gov (United States)

    Kageyama, Yuji; Kondo, Takefumi; Hashimoto, Yoshiko

    2011-11-01

    Genome analysis has identified a number of putative non-protein-coding transcripts that do not contain ORFs longer than 100 codons. Although evidence strongly suggests that non-coding RNAs are important in a variety of biological phenomena, the discovery of small peptide-coding mRNAs confirms that some transcripts that have been assumed to be non-coding actually have coding potential. Their abundance and importance in biological phenomena make the sorting of non-coding RNAs from small peptide-coding mRNAs a key issue in functional genomics. However, validating the coding potential of small peptide-coding RNAs is complicated, because their ORF sequences are usually too short for computational analysis. In this review, we discuss computational and experimental methods for validating the translatability of these non-coding RNAs. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
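
    The 100-codon threshold mentioned above is easy to probe computationally; the following sketch scans a transcript for ATG-initiated ORFs of at most 100 codons. It is a deliberately naive finder (ignoring alternative start codons and the reverse strand), intended only to illustrate the criterion.

```python
import re

STOPS = ("TAA", "TAG", "TGA")

def short_orfs(seq, max_codons=100):
    """Return (start, end, n_codons) for ATG..stop ORFs, counting the start and stop codons."""
    orfs = []
    for m in re.finditer("ATG", seq):
        start = m.start()
        for i in range(start + 3, len(seq) - 2, 3):
            if seq[i:i + 3] in STOPS:
                n_codons = (i + 3 - start) // 3
                if n_codons <= max_codons:
                    orfs.append((start, i + 3, n_codons))
                break
    return orfs

print(short_orfs("GGATGAAACCCTGATTTATGCCCTAA"))   # [(2, 14, 4), (17, 26, 3)]
```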

  5. Codes of Good Governance

    DEFF Research Database (Denmark)

    Beck Jørgensen, Torben; Sørensen, Ditte-Lene

    2013-01-01

    Good governance is a broad concept used by many international organizations to spell out how states or countries should be governed. Definitions vary, but there is a clear core of common public values, such as transparency, accountability, effectiveness, and the rule of law. It is quite likely, however, that national views of good governance reflect different political cultures and institutional heritages. Fourteen national codes of conduct are analyzed. The findings suggest that public values converge and that they match model codes from the United Nations and the European Council as well as conceptions of good governance from other international organizations. While values converge, they are balanced and communicated differently, and seem to some extent to be translated into the national cultures. The set of global public values derived from this analysis include public interest, regime dignity...

  6. Synthetic histone code.

    Science.gov (United States)

    Fischle, Wolfgang; Mootz, Henning D; Schwarzer, Dirk

    2015-10-01

    Chromatin is the universal template of genetic information in all eukaryotic cells. This complex of DNA and histone proteins not only packages and organizes genomes but also regulates gene expression. A multitude of posttranslational histone modifications and their combinations are thought to constitute a code for directing distinct structural and functional states of chromatin. Methods of protein chemistry, including protein semisynthesis, amber suppression technology, and cysteine bioconjugation, have enabled the generation of so-called designer chromatin containing histones in defined and homogeneous modification states. Several of these approaches have matured from proof-of-concept studies into efficient tools and technologies for studying the biochemistry of chromatin regulation and for interrogating the histone code. We summarize pioneering experiments and recent developments in this exciting field of chemical biology. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
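
    The efficiency gain described above rests on the fact that circular convolution becomes pointwise multiplication in the Fourier domain, so the dominant linear system can be handled frequency-by-frequency. A minimal sketch of that equivalence (not the full ADMM solver) follows.

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal(64)          # a dictionary filter
x = rng.standard_normal(64)          # a coefficient map

fft_conv = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x)))
naive_conv = np.array([sum(d[k] * x[(i - k) % 64] for k in range(64)) for i in range(64)])
print(np.allclose(fft_conv, naive_conv))   # True: O(N log N) instead of O(N^2) per filter
```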

  8. Status of MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models both in strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder and the Graphical User Interface.

  9. Hydra Code Release

    OpenAIRE

    Couchman, H. M. P.; Pearce, F. R.; Thomas, P. A.

    1996-01-01

    Comment: A new version of the AP3M-SPH code, Hydra, is now available as a tar file from the following sites; http://coho.astro.uwo.ca/pub/hydra/hydra.html , http://star-www.maps.susx.ac.uk/~pat/hydra/hydra.html . The release now also contains a cosmological initial conditions generator, documentation, an installation guide and installation tests. A LaTex version of the documentation is included here

  10. Adaptive Hybrid Picture Coding.

    Science.gov (United States)

    1983-02-05

    process, namely displacement or motion detection and estimation. DISPLACEMENT AND MOTION Simply stated, motion is defined to be a time series of spatial...regressive model in that the prediction is made with respect to a time series. That is, future values of a time series are to be predicted on...B8 - 90. Robbins, John D., and Netravali, Arun N., "Interframe Television Coding Using Movement Compensation," International Conference on

  11. MELCOR computer code manuals

    Energy Technology Data Exchange (ETDEWEB)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  12. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsic-based vectorisation with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  13. On Some Ternary LCD Codes

    OpenAIRE

    Darkunde, Nitin S.; Patil, Arunkumar R.

    2018-01-01

    The main aim of this paper is to study $LCD$ codes. Linear codes with complementary dual ($LCD$) are those codes which have their intersection with their dual code equal to $\{0\}$. In this paper we will give a rather alternative proof of Massey's theorem [8], which is one of the most important characterizations of $LCD$ codes. Let $LCD[n,k]_3$ denote the maximum of possible values of $d$ among $[n,k,d]$ ternary $LCD$ codes. In [4], the authors have given an upper bound on $LCD[n,k]_2$ and extended th...

  14. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  15. Allele coding in genomic evaluation

    DEFF Research Database (Denmark)

    Strandén, Ismo; Christensen, Ole Fredslund

    2011-01-01

    this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter... coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being the best. Conclusions: Different allele coding methods lead to the same inference in the marker-based and equivalent models when a fixed...
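
    For readers unfamiliar with the term, "allele coding" refers to how SNP genotypes are turned into numeric covariates. The sketch below contrasts the common 0/1/2, -1/0/1 and centered codings, where centering subtracts twice the allele frequency, the variant the abstract singles out for good MCMC mixing. The genotype values here are hypothetical.

```python
import numpy as np

genotypes = np.array([0, 1, 2, 1, 0, 2])      # minor-allele counts at one marker
p = genotypes.mean() / 2                      # allele frequency estimated from the sample

coding_012 = genotypes                        # "0/1/2" coding
coding_101 = genotypes - 1                    # "-1/0/1" coding
coding_centered = genotypes - 2 * p           # centered coding: subtract twice the allele frequency
print(coding_centered)                        # [-1.  0.  1.  0. -1.  1.] for these toy genotypes
```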

  16. Polynomial weights and code constructions.

    Science.gov (United States)

    Massey, J. L.; Costello, D. J., Jr.; Justesen, J.

    1973-01-01

    Study of certain polynomials with the 'weight-retaining' property that any linear combination of these polynomials with coefficients in a general finite field has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is used in applications to Reed-Muller codes, a new class of 'repeated-root' binary cyclic codes, two new classes of binary convolutional codes derived from binary cyclic codes, and two new classes of binary convolutional codes derived from Reed-Solomon codes.

  17. Construction of new quantum MDS codes derived from constacyclic codes

    Science.gov (United States)

    Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran

    Obtaining quantum maximum distance separable (MDS) codes from dual containing classical constacyclic codes using Hermitian construction have paved a path to undertake the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes have been constructed here. One set of parameters obtained in this paper has achieved much larger distance than work done earlier. The remaining constructed parameters of quantum MDS codes have large minimum distance and were not explored yet.

  18. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  19. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Høholdt, Tom

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  20. New Code Matched Interleaver for Turbo Codes with Short Frames

    Directory of Open Access Journals (Sweden)

    LAZAR, G. A.

    2010-02-01

    Full Text Available Turbo codes are a parallel concatenation of two or more convolutional codes, separated by interleavers, therefore their performance is not influenced just by the constituent encoders, but also by the interleaver. For short frame turbo codes, the selection of a proper interleaver becomes critical. This paper presents a new algorithm of obtaining a code matched interleaver leading to a very high minimum distance and improved performance.

  1. Code Flows : Visualizing Structural Evolution of Source Code

    NARCIS (Netherlands)

    Telea, Alexandru; Auber, David

    2008-01-01

    Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual

  2. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    P. C. Catherine. K. M. S Soyjaudah. Department of Electrical and Electronics Engineering ... in the 1960's, Gallager in his PhD thesis worked on low-density parity-check (LDPC) codes (Gallager 1963). ..... In any case however, it is hoped that the ideas behind TG codes will help in the development of future intelligent coding ...

  3. Code flows : Visualizing structural evolution of source code

    NARCIS (Netherlands)

    Telea, Alexandru; Auber, David

    Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual

  4. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    This work proposes a blend of the two technologies, yielding a code that we nicknamed Turbo-Gallager or TG Code. The code has additional “intelligence” compared to its parents. It detects and corrects the so-called “undetected errors” and recovers from individual decoder failure by making use of a network of decoders.

  5. Probabilistic performance analysis using the SLEUTH fuel modelling code

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, I.D.

    1986-01-01

    The paper describes the development and sample use of a computer code which automates both the Monte Carlo and response surface approaches to probabilistic fuel performance modelling utilising the SLEUTH-82 deterministic program. A number of the statistical procedures employed, which have been prepared as independent computer codes, are also described. These are of general applicability in many areas of probabilistic assessment.
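
    A schematic of the Monte Carlo side of such an analysis: sample the uncertain inputs, run the deterministic model for each sample, and summarize the output distribution. The model and distributions below are placeholders for illustration, not SLEUTH-82 physics.

```python
import numpy as np

rng = np.random.default_rng(42)

def performance_model(power, gap_conductance):
    """Stand-in for one deterministic code run (e.g. a peak fuel temperature estimate)."""
    return 400.0 + 0.05 * power / gap_conductance

n = 10_000
power = rng.normal(20_000.0, 1_000.0, n)          # hypothetical uncertain input distributions
gap = rng.lognormal(np.log(5.0), 0.1, n)
temperatures = performance_model(power, gap)
print(round(float(np.percentile(temperatures, 95)), 1))   # e.g. a 95th-percentile estimate
```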

  6. The impact of international codes of conduct on employment ...

    African Journals Online (AJOL)

    The study examined how international codes of conduct address employment conditions and gender issues in the Chinese flower industry. A sample of 20 companies was purposively selected and 200 workers from these companies were interviewed. The adoption of international codes did not improve workers conditions ...

  7. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... of the theoretical backgrounds of each model and the method of analysis or assessment; (2) General... input and output files from a sample computer run; and reports on code verification, benchmarking, validation, and quality assurance procedures; (3) Detailed descriptions of the structure of computer codes...

  8. Code Generation with Templates

    CERN Document Server

    Arnoldus, Jeroen; Serebrenik, A

    2012-01-01

    Templates are used to generate all kinds of text, including computer code. Over the last decade, the use of templates has gained a lot of popularity due to the increase in dynamic web applications. Templates are a tool for programmers, and implementations of template engines are most often based on practical experience rather than on a theoretical background. This book reveals the mathematical background of templates and shows interesting findings for improving the practical use of templates. First, a framework to determine the necessary computational power for the template metalanguage is presen

  9. Cinder begin creative coding

    CERN Document Server

    Rijnieks, Krisjanis

    2013-01-01

    Presented in an easy to follow, tutorial-style format, this book will lead you step-by-step through the multi-faceted uses of Cinder. "Cinder: Begin Creative Coding" is for people who already have experience in programming. It can serve as a transition from a previous background in Processing, Java in general, JavaScript, openFrameworks, C++ in general or ActionScript to the framework covered in this book, namely Cinder. If you like quick and easy to follow tutorials that will let you see progress in less than an hour - this book is for you. If you are searching for a book that will explain al

  10. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload cognitive strain on preserving style standards in your code base as well as avoiding error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters in the programming tools you already use with very little effort.

  11. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  12. Common coding variants of the HNF1A gene are associated with multiple cardiovascular risk phenotypes in community-based samples of younger and older European-American adults: the Coronary Artery Risk Development in Young Adults Study and The Cardiovascular Health Study.

    Science.gov (United States)

    Reiner, Alexander P; Gross, Myron D; Carlson, Christopher S; Bielinski, Suzette J; Lange, Leslie A; Fornage, Myriam; Jenny, Nancy S; Walston, Jeremy; Tracy, Russell P; Williams, O Dale; Jacobs, David R; Nickerson, Deborah A

    2009-06-01

    The transcription factor hepatocyte nuclear factor (HNF)-1 alpha regulates the activity of a number of genes involved in innate immunity, blood coagulation, lipid and glucose transport and metabolism, and cellular detoxification. Common polymorphisms of the HNF-1 alpha gene (HNF1A) were recently associated with plasma C-reactive protein and gamma-glutamyl transferase concentration in middle-aged to older European Americans (EA). We assessed whether common variants of HNF1A are associated with C-reactive protein, gamma-glutamyl transferase, and other atherosclerotic and metabolic risk factors, in the large, population-based Coronary Artery Risk Development in Young Adults Study of healthy young EA (n=2154) and African American (AA; n=2083) adults. The minor alleles of Ile27Leu (rs1169288) and Ser486Asn (rs2464196) were associated with 0.10 to 0.15 standard deviation units lower C-reactive protein and gamma-glutamyl transferase levels in EA. The same HNF1A coding variants were associated with higher low-density lipoprotein cholesterol, apolipoprotein B, creatinine, and fibrinogen in EA. We replicated the associations between HNF1A coding variants and C-reactive protein, fibrinogen, low-density lipoprotein cholesterol, and renal function in a second population-based sample of EA adults 65 years and older from the Cardiovascular Health Study. The HNF1A Ser486Asn and/or Ile27Leu variants were also associated with increased risk of subclinical coronary atherosclerosis in Coronary Artery Risk Development in Young Adults and with incident coronary heart disease in Cardiovascular Health Study. The Ile27Leu and Ser486Asn variants were 3-fold less common in AA than in EA. There was little evidence of association between HNF1A genotype and atherosclerosis-related phenotypes in AA. Common polymorphisms of HNF1A seem to influence multiple phenotypes related to cardiovascular risk in the general population of younger and older EA adults.

  13. Authorship Attribution of Source Code

    Science.gov (United States)

    Tennyson, Matthew F.

    2013-01-01

    Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…

  14. Coded nanoscale self-assembly

    Indian Academy of Sciences (India)

    the number of starting particles. Figure 6. Coded self-assembly results in specific shapes. When the constituent particles are coded to combine only according to certain defined rules, the assembly always manages to generate the same shape. The simplest case of linear coding with the multiseed option is presented here. in place the resultant ...

  15. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the

  16. Order functions and evaluation codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pellikaan, Ruud; van Lint, Jack

    1997-01-01

    Based on the notion of an order function we construct and determine the parameters of a class of error-correcting evaluation codes. This class includes the one-point algebraic geometry codes as well as the generalized Reed-Muller codes, and the parameters are determined without using the heavy...

  17. Coding Issues in Grounded Theory

    Science.gov (United States)

    Moghaddam, Alireza

    2006-01-01

    This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…

  18. Product Codes for Optical Communication

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl

    2002-01-01

    Many optical communication systems might benefit from forward error correction. We present a hard-decision decoding algorithm for the "Block Turbo Codes", suitable for optical communication, which makes this coding scheme an alternative to Reed-Solomon codes....

  19. Time-Varying Space-Only Codes for Coded MIMO

    CERN Document Server

    Duyck, Dieter; Takawira, Fambirai; Boutros, Joseph J; Moeneclaey, Marc

    2012-01-01

    Multiple antenna (MIMO) devices are widely used to increase reliability and information bit rate. Optimal error rate performance (full diversity and large coding gain), for unknown channel state information at the transmitter and for maximal rate, can be achieved by approximately universal space-time codes, but comes at a price of large detection complexity, infeasible for most practical systems. We propose a new coded modulation paradigm: error-correction outer code with space-only but time-varying precoder (as inner code). We refer to the latter as Ergodic Mutual Information (EMI) code. The EMI code achieves the maximal multiplexing gain and full diversity is proved in terms of the outage probability. Contrary to most of the literature, our work is not based on the elegant but difficult classical algebraic MIMO theory. Instead, the relation between MIMO and parallel channels is exploited. The theoretical proof of full diversity is corroborated by means of numerical simulations for many MIMO scenarios, in te...

  20. Genetic code for sine

    Science.gov (United States)

    Abdullah, Alyasa Gan; Wah, Yap Bee

    2015-02-01

    The computation of the approximate values of the trigonometric sines was discovered by Bhaskara I (c. 600-c. 680), a seventh century Indian mathematician, and is known as Bhaskara I's sine approximation formula. The formula is given in his treatise titled Mahabhaskariya. In the 14th century, Madhava of Sangamagrama, a Kerala mathematician-astronomer, constructed the table of trigonometric sines of various angles. Madhava's table gives the measure of angles in arcminutes, arcseconds and sixtieths of an arcsecond. The search for more accurate formulas led to the discovery of the power series expansion by Madhava of Sangamagrama (c. 1350-c. 1425), the founder of the Kerala school of astronomy and mathematics. In 1715, the Taylor series was introduced by Brook Taylor, an English mathematician. If the Taylor series is centered at zero, it is called a Maclaurin series, named after the Scottish mathematician Colin Maclaurin. Some of the important Maclaurin series expansions include trigonometric functions. This paper introduces the genetic code of the sine of an angle without using power series expansion. The genetic code using the square root approach reveals the pattern in the signs (plus, minus) and sequence of numbers in the sine of an angle. The square root approach complements the Pythagoras method, provides a better understanding of calculating an angle and will be useful for teaching the concepts of angles in trigonometry.
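    As a quick illustration of the two approaches this record contrasts, the sketch below compares Bhaskara I's rational approximation (stated here in degrees, in its standard textbook form rather than as quoted from the record) with a truncated Maclaurin series; the number of series terms and the test angles are arbitrary choices.

```python
import math

def bhaskara_sine(x_deg: float) -> float:
    """Bhaskara I's rational approximation of sin(x) for 0 <= x <= 180 degrees."""
    return 4 * x_deg * (180 - x_deg) / (40500 - x_deg * (180 - x_deg))

def maclaurin_sine(x_rad: float, terms: int = 6) -> float:
    """Truncated Maclaurin series: sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x_rad ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

for deg in (15, 30, 45, 60, 90):
    rad = math.radians(deg)
    print(deg, round(bhaskara_sine(deg), 6), round(maclaurin_sine(rad), 6),
          round(math.sin(rad), 6))
```

    For 30 and 90 degrees the rational approximation is exact, which makes it a convenient classroom check against the series values.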

  1. An Improved Robust Sparse Coding for Face Recognition with Disguise

    Directory of Open Access Journals (Sweden)

    Dexing Zhong

    2012-10-01

    Full Text Available Robust vision-based face recognition is one of the most challenging tasks for robots. Recently the sparse representation-based classification (SRC) has been proposed to solve the problem. All training samples without disguise are used to compose an over-complete dictionary, and the testing sample with disguise is represented over the dictionary by sparse coding coefficients plus an error. The coding residuals between the sample and each class of training samples are measured, and the class with the minimum residual is identified as the class to which the sample belongs. The robust sparse coding (RSC) seeks the MLE (maximum likelihood estimation) solution of the sparse coding problem, so it is more robust to disguise. However, the iterative algorithm used to solve RSC is highly time-consuming. In this paper, we propose an improved robust sparse coding (iRSC) algorithm for practical application conditions. During iterations, the dictionary is reduced by eliminating the objects with larger coding residuals. The over-complete property of the dictionary is not affected. Experiments on the AR face database demonstrate that the coding is sparser and the efficiency is higher in iRSC.
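    To make the minimum-residual decision rule concrete, here is a minimal sketch. For brevity it replaces the l1-regularized sparse coding step of SRC/RSC with an ordinary least-squares fit per class, so it illustrates only the class-wise reconstruction-residual comparison, not the sparse or robust weighting machinery of the paper; all data and the names below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training dictionaries": columns are training samples, grouped by class.
n_features, n_per_class, classes = 50, 8, [0, 1, 2]
dictionaries = {c: rng.normal(size=(n_features, n_per_class)) + c for c in classes}

# A test sample drawn near class 1.
test = rng.normal(size=n_features) + 1.0

def class_residual(D: np.ndarray, y: np.ndarray) -> float:
    """Least-squares stand-in for sparse coding: fit y with columns of D and
    return the norm of the reconstruction residual."""
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return float(np.linalg.norm(y - D @ coef))

residuals = {c: class_residual(D, test) for c, D in dictionaries.items()}
predicted = min(residuals, key=residuals.get)   # class with the smallest residual
print(residuals, "->", predicted)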

  2. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  3. The Art of Readable Code

    CERN Document Server

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  4. Sub-Transport Layer Coding

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2014-01-01

    Packet losses in wireless networks dramatically curb the performance of TCP. This paper introduces a simple coding shim that aids IP-layer traffic in lossy environments while being transparent to transport layer protocols. The proposed coding approach enables erasure correction while being oblivious to the congestion control algorithms of the utilised transport layer protocol. Although our coding shim is indifferent towards the transport layer protocol, we focus on the performance of TCP when run on top of our proposed coding mechanism due to its widespread use. The coding shim provides gains...

  5. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...

  6. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  7. Assessment of the computer code COBRA/CFTL

    Energy Technology Data Exchange (ETDEWEB)

    Baxi, C. B.; Burhop, C. J.

    1981-07-01

    The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications, compares the two code results for three cases which represent conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output and the code listing are included in the Appendices.

  8. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hatten, Mike [Solarc Energy Group, LLC, Seattle, WA (United States); Jones, Dennis [Group 14 Engineering, Inc., Denver, CO (United States); Cooper, Matthew [Group 14 Engineering, Inc., Denver, CO (United States)

    2017-03-24

    Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  9. Peripheral coding of taste

    Science.gov (United States)

    Liman, Emily R.; Zhang, Yali V.; Montell, Craig

    2014-01-01

    Five canonical tastes, bitter, sweet, umami (amino acid), salty and sour (acid) are detected by animals as diverse as fruit flies and humans, consistent with a near universal drive to consume fundamental nutrients and to avoid toxins or other harmful compounds. Surprisingly, despite this strong conservation of basic taste qualities between vertebrates and invertebrates, the receptors and signaling mechanisms that mediate taste in each are highly divergent. The identification over the last two decades of receptors and other molecules that mediate taste has led to stunning advances in our understanding of the basic mechanisms of transduction and coding of information by the gustatory systems of vertebrates and invertebrates. In this review, we discuss recent advances in taste research, mainly from the fly and mammalian systems, and we highlight principles that are common across species, despite stark differences in receptor types. PMID:24607224

  10. Code des baux 2018

    CERN Document Server

    Vial-Pedroletti, Béatrice; Kendérian, Fabien; Chavance, Emmanuelle; Coutan-Lapalus, Christelle

    2017-01-01

    The Code des baux 2018 offers extremely practical, reliable content, up to date as of 1 August 2017. This 16th edition notably incorporates: the decree of 27 July 2017 on the evolution of certain rents in the context of a new letting or a lease renewal, adopted pursuant to Article 18 of Law No. 89-462 of 6 July 1989; the law of 27 January 2017 on equality and citizenship; the law of 9 December 2016 on transparency, the fight against corruption and the modernisation of economic life; and the law of 18 November 2016 on the modernisation of justice in the 21st century.

  11. [Neural codes for perception].

    Science.gov (United States)

    Romo, R; Salinas, E; Hernández, A; Zainos, A; Lemus, L; de Lafuente, V; Luna, R

    This article describes experiments designed to show the neural codes associated with the perception and processing of tactile information. The results of these experiments have shown the neural activity correlated with tactile perception. The neurones of the primary somatosensory cortex (S1) represent the physical attributes of tactile stimuli, and we found that these representations correlated with tactile perception. By means of intracortical microstimulation we demonstrated the causal relationship between S1 activity and tactile perception. The motor areas of the frontal lobe contain the link between sensory and motor representations while decisions are being taken. S1 generates neural representations of the somatosensory stimuli which seem to be sufficient for tactile perception. These neural representations are subsequently processed by areas central to S1 and seem useful in perception, memory and decision making.

  12. Code-labelling

    DEFF Research Database (Denmark)

    Spangsberg, Thomas Hvid; Brynskov, Martin

    The code-labelling exercise is an attempt to apply natural language education techniques for solving the challenge of teaching introductory programming to non-STEM novices in higher education. This paper presents findings from a study exploring the use of natural language teaching techniques in programming education, collected in an Action Research cycle. The results support the use of a structural approach to teaching programming to this target audience; particularly, the translation-grammar method seems to integrate well with programming education. The paper also explores the potential underlying reasons. It seems the exercise invokes an assimilation of students' existing cognitive schemata and supports a deep-learning experience. The exercise is an invitation to other teachers to create further iterations to improve their own teaching. It also seeks to enrich the portfolio of teaching activities...

  13. Transionospheric Propagation Code (TIPC)

    Science.gov (United States)

    Roussel-Dupre, Robert; Kelley, Thomas A.

    1990-10-01

    The Transionospheric Propagation Code is a computer program developed at Los Alamos National Lab to perform certain tasks related to the detection of VHF signals following propagation through the ionosphere. The code is written in FORTRAN 77, runs interactively and was designed to be as machine independent as possible. A menu format in which the user is prompted to supply appropriate parameters for a given task has been adopted for the input while the output is primarily in the form of graphics. The user has the option of selecting from five basic tasks, namely transionospheric propagation, signal filtering, signal processing, delta times of arrival (DTOA) study, and DTOA uncertainty study. For the first task a specified signal is convolved against the impulse response function of the ionosphere to obtain the transionospheric signal. The user is given a choice of four analytic forms for the input pulse or of supplying a tabular form. The option of adding Gaussian-distributed white noise or spectral noise to the input signal is also provided. The deterministic ionosphere is characterized to first order in terms of a total electron content (TEC) along the propagation path. In addition, a scattering model parameterized in terms of a frequency coherence bandwidth is also available. In the second task, detection is simulated by convolving a given filter response against the transionospheric signal. The user is given a choice of a wideband filter or a narrowband Gaussian filter. It is also possible to input a filter response. The third task provides for quadrature detection, envelope detection, and three different techniques for time-tagging the arrival of the transionospheric signal at specified receivers. The latter algorithms can be used to determine a TEC and thus take out the effects of the ionosphere to first order. Task four allows the user to construct a table of DTOAs vs TECs for a specified pair of receivers.
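    The first task described above amounts to convolving an input pulse with a channel impulse response and optionally adding white Gaussian noise. The sketch below shows that generic operation with NumPy; the pulse shape, impulse response and noise level are arbitrary illustrative choices and are not parameters of the TIPC code itself.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 1000.0                        # sample rate (Hz), arbitrary
t = np.arange(0, 0.2, 1 / fs)

pulse = np.exp(-((t - 0.02) / 0.005) ** 2)     # analytic Gaussian input pulse
impulse_response = np.exp(-t / 0.01)           # toy dispersive channel response
impulse_response /= impulse_response.sum()

received = np.convolve(pulse, impulse_response)[: t.size]        # propagated signal
noisy = received + rng.normal(scale=0.01, size=received.size)    # add white noise

print(f"peak delayed from {t[pulse.argmax()]:.3f}s to {t[noisy.argmax()]:.3f}s")
```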

  14. Galois LCD Codes over Finite Fields

    OpenAIRE

    Liu, Xiusheng; Fan, Yun; Liu, Hualu

    2017-01-01

    In this paper, we study complementary dual codes in a more general setting (which are called Galois LCD codes) by a uniform method. A necessary and sufficient condition for linear codes to be Galois LCD codes is determined, and constacyclic codes that are Galois LCD codes are characterized. Some illustrative examples of constacyclic codes that are Galois LCD MDS codes are provided as well. In particular, we study Hermitian LCD constacyclic codes. Finally, we present a construction of a class of ...

  15. Quantum Quasi-Cyclic LDPC Codes

    OpenAIRE

    Hagiwara, Manabu; Imai, Hideki

    2007-01-01

    In this paper, a construction of a pair of "regular" quasi-cyclic LDPC codes as ingredient codes for a quantum error-correcting code is proposed. That is, we find quantum regular LDPC codes with various weight distributions. Furthermore, our proposed codes admit many variations in length and code rate. These codes are obtained by a discrete mathematical characterization of the model matrices of quasi-cyclic LDPC codes. Our proposed codes achieve a bounded distance decoding (BDD) bound, or known a...

  16. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
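    A systematic LDGM code encodes a message u as [u, u·P], where P is a sparse (low-density) binary matrix, so each parity bit is the XOR of only a few message bits. The sketch below is a minimal illustration of that encoding step with an arbitrary random sparse P over GF(2); the sizes, column weight and function name are illustrative and it does not reproduce the concatenated construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m, col_weight = 12, 8, 3   # message bits, parity bits, ones per column of P

# Sparse binary matrix P: each parity bit is the XOR of `col_weight` message bits.
P = np.zeros((k, m), dtype=np.uint8)
for j in range(m):
    P[rng.choice(k, size=col_weight, replace=False), j] = 1

def ldgm_encode(u: np.ndarray) -> np.ndarray:
    """Systematic encoding: codeword = [message | message @ P mod 2]."""
    parity = u @ P % 2
    return np.concatenate([u, parity])

u = rng.integers(0, 2, size=k, dtype=np.uint8)
print(ldgm_encode(u))
```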

  17. New tools to analyze overlapping coding regions.

    Science.gov (United States)

    Bayegan, Amir H; Garcia-Martin, Juan Antonio; Clote, Peter

    2016-12-13

    Retroviruses transcribe messenger RNA for the overlapping Gag and Gag-Pol polyproteins, by using a programmed -1 ribosomal frameshift which requires a slippery sequence and an immediate downstream stem-loop secondary structure, together called frameshift stimulating signal (FSS). It follows that the molecular evolution of this genomic region of HIV-1 is highly constrained, since the retroviral genome must contain a slippery sequence (sequence constraint), code appropriate peptides in reading frames 0 and 1 (coding requirements), and form a thermodynamically stable stem-loop secondary structure (structure requirement). We describe a unique computational tool, RNAsampleCDS, designed to compute the number of RNA sequences that code two (or more) peptides p,q in overlapping reading frames, that are identical (or have BLOSUM/PAM similarity that exceeds a user-specified value) to the input peptides p,q. RNAsampleCDS then samples a user-specified number of messenger RNAs that code such peptides; alternatively, RNAsampleCDS can exactly compute the position-specific scoring matrix and codon usage bias for all such RNA sequences. Our software allows the user to stipulate overlapping coding requirements for all 6 possible reading frames simultaneously, even allowing IUPAC constraints on RNA sequences and fixing GC-content. We generalize the notion of codon preference index (CPI) to overlapping reading frames, and use RNAsampleCDS to generate control sequences required in the computation of CPI. Moreover, by applying RNAsampleCDS, we are able to quantify the extent to which the overlapping coding requirement in HIV-1 [resp. HCV] contribute to the formation of the stem-loop [resp. double stem-loop] secondary structure known as the frameshift stimulating signal. Using our software, we confirm that certain experimentally determined deleterious HCV mutations occur in positions for which our software RNAsampleCDS and RNAiFold both indicate a single possible nucleotide. We

  18. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs have been considered recently as joint source-channel codes (JSCCs for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC or a fixed-length code (FLC. This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an -ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing a per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  19. Benchmarking Tokamak edge modelling codes

    Science.gov (United States)

    Contributors To The Efda-Jet Work Programme; Coster, D. P.; Bonnin, X.; Corrigan, G.; Kirnev, G. S.; Matthews, G.; Spence, J.; Contributors to the EFDA-JET work programme

    2005-03-01

    Tokamak edge modelling codes are in widespread use to interpret and understand existing experiments, and to make predictions for future machines. Little direct benchmarking has been done between the codes, and the users of the codes have tended to concentrate on different experimental machines. An important validation step is to compare the codes for identical scenarios. In this paper, two of the major edge codes, SOLPS (B2.5-Eirene) and EDGE2D-NIMBUS are benchmarked against each other. A set of boundary conditions, transport coefficients, etc. for a JET plasma were chosen, and the two codes were run on the same grid. Initially, large differences were seen in the resulting plasmas. These differences were traced to differing physics assumptions with respect to the parallel heat flux limits. Once these were switched off in SOLPS, or implemented and switched on in EDGE2D-NIMBUS, the remaining differences were small.

  20. Low complexity hevc intra coding

    OpenAIRE

    Ruiz Coll, José Damián

    2016-01-01

    Over the last few decades, much research has focused on the development and optimization of video codecs for media distribution to end-users via the Internet, broadcasts or mobile networks, but also for videoconferencing and for recording on optical disks for media distribution. Most of the video coding standards for delivery are characterized by a high-efficiency hybrid scheme, based on inter-prediction coding for temporal picture decorrelation, and intra-prediction coding for spat...

  1. IRIG Serial Time Code Formats

    Science.gov (United States)

    2016-08-01

    and G. It should be noted that this standard reflects the present state of the art in serial time code formatting and is not intended to constrain...separation for visual resolution. The LSB occurs first except for the fractional seconds subword that follows the day-of-year subword. The BCD TOY code...and P6 to complete the BCD time code word. An index marker occurs between the decimal digits in each subword to provide separation for visual

  2. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  3. Indices for Testing Neural Codes

    OpenAIRE

    Jonathan D. Victor; Nirenberg, Sheila

    2008-01-01

    One of the most critical challenges in systems neuroscience is determining the neural code. A principled framework for addressing this can be found in information theory. With this approach, one can determine whether a proposed code can account for the stimulus-response relationship. Specifically, one can compare the transmitted information between the stimulus and the hypothesized neural code with the transmitted information between the stimulus and the behavioral response. If the former is ...

  4. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  5. Coding, cryptography and combinatorics

    CERN Document Server

    Niederreiter, Harald; Xing, Chaoping

    2004-01-01

    It has long been recognized that there are fascinating connections between coding theory, cryptology, and combinatorics. Therefore it seemed desirable to us to organize a conference that brings together experts from these three areas for a fruitful exchange of ideas. We decided on a venue in the Huang Shan (Yellow Mountain) region, one of the most scenic areas of China, so as to provide the additional inducement of an attractive location. The conference was planned for June 2003 with the official title Workshop on Coding, Cryptography and Combinatorics (CCC 2003). Those who are familiar with events in East Asia in the first half of 2003 can guess what happened in the end, namely the conference had to be cancelled in the interest of the health of the participants. The SARS epidemic posed too serious a threat. At the time of the cancellation, the organization of the conference was at an advanced stage: all invited speakers had been selected and all abstracts of contributed talks had been screened by the p...

  6. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  7. The FLUKA code: an overview

    Energy Technology Data Exchange (ETDEWEB)

    Ballarini, F [University of Pavia and INFN (Italy); Battistoni, G [University of Milan and INFN (Italy); Campanella, M; Carboni, M; Cerutti, F [University of Milan and INFN (Italy); Empl, A [University of Houston, Houston (United States); Fasso, A [SLAC, Stanford (United States); Ferrari, A [CERN, CH-1211 Geneva (Switzerland); Gadioli, E [University of Milan and INFN (Italy); Garzelli, M V [University of Milan and INFN (Italy); Lantz, M [University of Milan and INFN (Italy); Liotta, M [University of Pavia and INFN (Italy); Mairani, A [University of Pavia and INFN (Italy); Mostacci, A [Laboratori Nazionali di Frascati, INFN (Italy); Muraro, S [University of Milan and INFN (Italy); Ottolenghi, A [University of Pavia and INFN (Italy); Pelliccioni, M [Laboratori Nazionali di Frascati, INFN (Italy); Pinsky, L [University of Houston, Houston (United States); Ranft, J [Siegen University, Siegen (Germany); Roesler, S [CERN, CH-1211 Geneva (Switzerland); Sala, P R [University of Milan and INFN (Italy); Scannicchio, D [University of Pavia and INFN (Italy); Trovati, S [University of Pavia and INFN (Italy); Villari, R; Wilson, T [Johnson Space Center, NASA (United States); Zapp, N [Johnson Space Center, NASA (United States); Vlachoudis, V [CERN, CH-1211 Geneva (Switzerland)

    2006-05-15

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including but not only, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis to the hadronic and nuclear sector.

  8. The FLUKA Code: an Overview

    Energy Technology Data Exchange (ETDEWEB)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M.V.; Lantz, M.; Liotta, M.; Mairani,; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P.R.; /Milan U. /INFN, Milan /Pavia U. /INFN, Pavia /CERN /Siegen U.

    2005-11-09

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including but not only, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis to the hadronic and nuclear sector.

  9. Understanding perception through neural "codes".

    Science.gov (United States)

    Freeman, Walter J

    2011-07-01

    A major challenge for cognitive scientists is to deduce and explain the neural mechanisms of the rapid transposition between stimulus energy and recalled memory-between the specific (sensation) and the generic (perception)-in both material and mental aspects. Researchers are attempting three explanations in terms of neural codes. The microscopic code: cellular neurobiologists correlate stimulus properties with the rates and frequencies of trains of action potentials induced by stimuli and carried by topologically organized axons. The mesoscopic code: cognitive scientists formulate symbolic codes in trains of action potentials from feature-detector neurons of phonemes, lines, odorants, vibrations, faces, etc., that object-detector neurons bind into representations of stimuli. The macroscopic code: neurodynamicists extract neural correlates of stimuli and associated behaviors in spatial patterns of oscillatory fields of dendritic activity, which self-organize and evolve on trajectories through high-dimensional brain state space. This multivariate code is expressed in landscapes of chaotic attractors. Unlike other scientific codes, such as DNA and the periodic table, these neural codes have no alphabet or syntax. They are epistemological metaphors that experimentalists need to measure neural activity and engineers need to model brain functions. My aim is to describe the main properties of the macroscopic code and the grand challenge it poses: how do very large patterns of textured synchronized oscillations form in cortex so quickly? © 2010 IEEE

  10. Programming Entity Framework Code First

    CERN Document Server

    Lerman, Julia

    2011-01-01

    Take advantage of the Code First data modeling approach in ADO.NET Entity Framework, and learn how to build and configure a model based on existing classes in your business domain. With this concise book, you'll work hands-on with examples to learn how Code First can create an in-memory model and database by default, and how you can exert more control over the model through further configuration. Code First provides an alternative to the database first and model first approaches to the Entity Data Model. Learn the benefits of defining your model with code, whether you're working with an exis

  11. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved code modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
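    The circulant lifting step mentioned above can be shown in isolation: each edge (nonzero entry) of a protograph base matrix is replaced by a Z x Z circulant permutation matrix and each zero by a Z x Z zero block, yielding a quasi-cyclic parity-check matrix. The sketch below illustrates only that generic expansion with an arbitrary toy base matrix and arbitrary shift values; it does not reproduce the two-stage lifting or the specific protographs of the patent.

```python
import numpy as np

def circulant_permutation(z: int, shift: int) -> np.ndarray:
    """Z x Z identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(z, dtype=np.uint8), shift, axis=1)

def lift(base: np.ndarray, shifts: np.ndarray, z: int) -> np.ndarray:
    """Expand a protograph base matrix into a quasi-cyclic parity-check matrix."""
    rows = []
    for i in range(base.shape[0]):
        blocks = [circulant_permutation(z, int(shifts[i, j])) if base[i, j]
                  else np.zeros((z, z), dtype=np.uint8)
                  for j in range(base.shape[1])]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

base = np.array([[1, 1, 1, 0],      # toy protograph: 2 check nodes, 4 variable nodes
                 [0, 1, 1, 1]], dtype=np.uint8)
shifts = np.array([[0, 1, 3, 0],
                   [0, 2, 0, 4]])
H = lift(base, shifts, z=5)
print(H.shape)   # (10, 20)
```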

  12. The FLUKA code: an overview

    Science.gov (United States)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fassò, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannicchio, D.; Trovati, S.; Villari, R.; Wilson, T.; Zapp, N.; Vlachoudis, V.

    2006-05-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including but not only, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis to the hadronic and nuclear sector.

  13. Golay and other box codes

    Science.gov (United States)

    Solomon, G.

    1992-01-01

    The (24,12;8) extended Golay Code can be generated as a 6x4 binary matrix from the (15,11;3) BCH-Hamming Code, represented as a 5 x 3 matrix, by adding a row and a column, both of odd or even parity. The odd-parity case provides the additional 12th dimension. Furthermore, any three columns and five rows of the 6 x 4 Golay form a BCH-Hamming (15,11;3) Code. Similarly a (80,58;8) code can be generated as a 10 x 8 binary matrix from the (63,57;3) BCH-Hamming Code represented as a 9 x 7 matrix by adding a row and a column, both of odd or even parity. Furthermore, any seven columns along with the top nine rows form a BCH-Hamming (63,57;3) Code. A (80,40;16) 10 x 8 matrix binary code with weight structure identical to the extended (80,40;16) Quadratic Residue Code is generated from a (63,39;7) binary cyclic code represented as a 9 x 7 matrix, by adding a row and a column, both of odd or even parity.
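    The row-and-column parity extension used repeatedly in this construction can be illustrated on its own: arrange a codeword as a rectangular binary matrix, then append one parity column and one parity row (even parity here; the odd-parity variant is analogous). This sketch shows only that extension step on an arbitrary 5 x 3 block; it does not construct the underlying BCH-Hamming or Golay codes, and the function name is illustrative.

```python
import numpy as np

def extend_with_parity(block: np.ndarray) -> np.ndarray:
    """Append an even-parity column and an even-parity row to a binary matrix."""
    col = block.sum(axis=1) % 2            # parity bit for each row
    with_col = np.column_stack([block, col])
    row = with_col.sum(axis=0) % 2         # parity bit for each extended column
    return np.vstack([with_col, row])

# A (15,11) BCH-Hamming codeword would be arranged as a 5 x 3 matrix; here we
# use an arbitrary 5 x 3 binary block just to show the 6 x 4 extension.
block = np.array([[1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=np.uint8)
print(extend_with_parity(block))
```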

  14. An algebraic approach to graph codes

    DEFF Research Database (Denmark)

    Pinero, Fernando

    ... theory as evaluation codes. Chapter three introduces graph-based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph-based codes with a result combining graph-based codes and subfield subcodes; moreover, some codes in Chapter four are optimal or best known for their parameters. In Chapter five we study some graph codes with Reed–Solomon component codes. The underlying graph is well known and widely used for its good characteristics, which helps us compute the dimension of the graph codes. We also introduce a combinatorial concept related to the iterative encoding of graph codes with MDS component codes. The last chapter deals with affine Grassmann codes and Grassmann codes. We begin with some previously known codes and prove that they are also Tanner codes of the incidence graph of the point–line partial geometry...

  15. QR code for medical information uses.

    Science.gov (United States)

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.

  16. CODE: EB10

    African Journals Online (AJOL)

    Philippe

    (KIE). A correlation research design was used and the null hypothesis was tested on a sample of 151 respondents from a population of 318 employees in KIE. One of the major findings was that there is a significant relationship between employees' persistence and job factors (supervisory support and coaching, task design ...

  17. Adaptive subband coding of full motion video

    Science.gov (United States)

    Sharifi, Kamran; Xiao, Leping; Leon-Garcia, Alberto

    1993-10-01

    In this paper a new algorithm for digital video coding is presented that is suitable for digital storage and video transmission applications in the range of 5 to 10 Mbps. The scheme is based on frame differencing and, unlike recent proposals, does not employ motion estimation and compensation. A novel adaptive grouping structure is used to segment the video sequence into groups of frames of variable sizes. Within each group, the frame difference is taken in a closed loop Differential Pulse Code Modulation (DPCM) structure and then decomposed into different frequency subbands. The important subbands are transformed using the Discrete Cosine Transform (DCT) and the resulting coefficients are adaptively quantized and runlength coded. The adaptation is based on the variance of sample values in each subband. To reduce the computation load, a very simple and efficient way has been used to estimate the variance of the subbands. It is shown that for many types of sequences, the performance of the proposed coder is comparable to that of coding methods which use motion parameters.
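    The adaptation described here keys the quantizer to the sample variance of each subband. A minimal sketch of that idea follows: uniform quantization with a step size proportional to each subband's standard deviation, on synthetic data with arbitrary constants. It omits the DPCM loop, the DCT and the run-length stages of the actual coder, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic subbands with very different energies (e.g. low vs. high frequency).
subbands = {"LL": 10.0 * rng.normal(size=1024),
            "LH": 2.0 * rng.normal(size=1024),
            "HH": 0.3 * rng.normal(size=1024)}

def adaptive_quantize(x: np.ndarray, relative_step: float = 0.5):
    """Uniform quantizer whose step size scales with the subband's std dev."""
    step = relative_step * max(float(x.std()), 1e-12)
    indices = np.round(x / step).astype(int)
    return indices, step

for name, band in subbands.items():
    idx, step = adaptive_quantize(band)
    recon = idx * step
    mse = float(np.mean((band - recon) ** 2))
    print(f"{name}: step={step:.3f}, mse={mse:.4f}")
```

    Because the step size tracks the subband variance, low-energy subbands are coded with proportionally finer indices, which is the basic effect the abstract describes.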

  18. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Full Text Available Abstract Background Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then, genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included into the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
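    The two allele codings contrasted in this abstract are easy to state in code: count the copies of the second allele (0/1/2) for each marker and, for the centered coding, subtract the per-marker mean so the regression coefficients average to zero within each marker. The sketch below uses a tiny synthetic genotype matrix and is only an illustration of the coding itself, not of the genomic evaluation models discussed in the paper.

```python
import numpy as np

# Rows = individuals, columns = markers; entries count copies of the second allele.
genotypes = np.array([[0, 1, 2],
                      [1, 1, 0],
                      [2, 0, 1],
                      [1, 2, 2]], dtype=float)

# Common 0/1/2 allele coding: use the genotype counts as-is.
coded_012 = genotypes

# Centered allele coding: subtract each marker's mean so columns average to zero.
coded_centered = genotypes - genotypes.mean(axis=0, keepdims=True)

print(coded_centered)
print(coded_centered.mean(axis=0))   # approximately 0 for every marker
```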

  19. Civil Code, 11 December 1987.

    Science.gov (United States)

    1988-01-01

    Article 162 of this Mexican Code provides, among other things, that "Every person has the right freely, responsibly, and in an informed fashion to determine the number and spacing of his or her children." When a marriage is involved, this right is to be observed by the spouses "in agreement with each other." The civil codes of the following states contain the same provisions: 1) Baja California (Art. 159 of the Civil Code of 28 April 1972 as revised in Decree No. 167 of 31 January 1974); 2) Morelos (Art. 255 of the Civil Code of 26 September 1949 as revised in Decree No. 135 of 29 December 1981); 3) Queretaro (Art. 162 of the Civil Code of 29 December 1950 as revised in the Act of 9 January 1981); 4) San Luis Potosi (Art. 147 of the Civil Code of 24 March 1946 as revised in 13 June 1978); Sinaloa (Art. 162 of the Civil Code of 18 June 1940 as revised in Decree No. 28 of 14 October 1975); 5) Tamaulipas (Art. 146 of the Civil Code of 21 November 1960 as revised in Decree No. 20 of 30 April 1975); 6) Veracruz-Llave (Art. 98 of the Civil Code of 1 September 1932 as revised in the Act of 30 December 1975); and 7) Zacatecas (Art. 253 of the Civil Code of 9 February 1965 as revised in Decree No. 104 of 13 August 1975). The Civil Codes of Puebla and Tlaxcala provide for this right only in the context of marriage with the spouses in agreement. See Art. 317 of the Civil Code of Puebla of 15 April 1985 and Article 52 of the Civil Code of Tlaxcala of 31 August 1976 as revised in Decree No. 23 of 2 April 1984. The Family Code of Hidalgo requires as a formality of marriage a certification that the spouses are aware of methods of controlling fertility, responsible parenthood, and family planning. In addition, Article 22 the Civil Code of the Federal District provides that the legal capacity of natural persons is acquired at birth and lost at death; however, from the moment of conception the individual comes under the protection of the law, which is valid with respect to the

  20. Error coding simulations in C

    Science.gov (United States)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
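    The role of interleaving mentioned above, spreading a burst of channel errors so the Reed-Solomon decoder sees them as scattered symbol errors, can be illustrated with a simple block interleaver: write symbols row by row into a depth x width array and read them out column by column. The depth, width and burst length below are arbitrary illustrative choices; this is not the CCSDS interleaver itself.

```python
import numpy as np

def interleave(symbols: np.ndarray, depth: int) -> np.ndarray:
    """Write row-wise into a depth x width block, read column-wise."""
    return symbols.reshape(depth, -1).T.ravel()

def deinterleave(symbols: np.ndarray, depth: int) -> np.ndarray:
    """Inverse of `interleave`."""
    return symbols.reshape(-1, depth).T.ravel()

depth, width = 5, 10
data = np.arange(depth * width)

sent = interleave(data, depth)
sent_with_burst = sent.copy()
sent_with_burst[12:17] = -1               # a burst of 5 corrupted symbols

received = deinterleave(sent_with_burst, depth)
print(np.where(received == -1)[0])        # burst positions are now spread apart
```

    After deinterleaving, the five corrupted positions are separated by roughly the block width, so no single codeword has to absorb the whole burst.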

  1. Asymmetric Quantum Codes on Toric Surfaces

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    2017-01-01

    Asymmetric quantum error-correcting codes are quantum codes defined over biased quantum channels: qubit-flip and phase-shift errors may have equal or different probabilities. The code construction is the Calderbank-Shor-Steane construction based on two linear codes. We present families of toric surfaces, toric codes and associated asymmetric quantum error-correcting codes.

  2. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format), and plain text files with numerical parameters, wake fields, ground motion data etc. The Matlab environment provides a flexible system for graphical output.

  3. Code & order in polygonal billiards

    OpenAIRE

    Bobok, Jozef; Troubetzkoy, Serge

    2011-01-01

    Two polygons $P,Q$ are code equivalent if there are billiard orbits $u,v$ which hit the same sequence of sides and such that the projections of the orbits are dense in the boundaries $\partial P, \partial Q$. Our main results show when code equivalent polygons have the same angles, resp. are similar, resp. affinely similar.

  4. Distributed space-time coding

    CERN Document Server

    Jing, Yindi

    2014-01-01

    Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.

  5. Grassmann codes and Schubert unions

    DEFF Research Database (Denmark)

    Hansen, Johan Peder; Johnsen, Trygve; Ranestad, Kristian

    2009-01-01

    We study subsets of Grassmann varieties over a field , such that these subsets are unions of Schubert cycles, with respect to a fixed flag. We study such sets in detail, and give applications to coding theory, in particular for Grassmann codes. For much is known about such Schubert unions with a ...

  6. NETWORK CODING BY BEAM FORMING

    DEFF Research Database (Denmark)

    2013-01-01

    Network coding by beam forming in networks, for example, in single frequency networks, can provide aid in increasing spectral efficiency. When network coding by beam forming and user cooperation are combined, spectral efficiency gains may be achieved. According to certain embodiments, a method...

  7. Code breaking in the pacific

    CERN Document Server

    Donovan, Peter

    2014-01-01

    Covers the historical context and the evolution of the technically complex Allied Signals Intelligence (Sigint) activity against Japan from 1920 to 1945; describes, explains and analyzes the code breaking techniques developed during the war in the Pacific; exposes the blunders (in code construction and use) made by the Japanese Navy that led to significant US Naval victories.

  8. Squares of Random Linear Codes

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

    Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give...
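    The $d$-th power construction is concrete enough to compute for small examples: over GF(2), the square of $C$ is spanned by the componentwise (Schur) products of pairs of rows of a generator matrix, since products of arbitrary codewords expand bilinearly into such row products. The sketch below estimates the dimension of the square of a random binary code by Gaussian elimination mod 2; the parameters and function names are arbitrary.

```python
import itertools
import numpy as np

def gf2_rank(rows: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = rows.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]        # swap pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                    # eliminate the column elsewhere
        rank += 1
    return rank

rng = np.random.default_rng(3)
k, n = 4, 12
G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)   # random generator matrix

# Square of the code: span of componentwise products g_i * g_j (i <= j).
products = np.array([G[i] & G[j]
                     for i, j in itertools.combinations_with_replacement(range(k), 2)])

print("dim C   =", gf2_rank(G))
print("dim C^2 =", gf2_rank(products))
```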

  9. Interleaver Design for Turbo Coding

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Zyablov, Viktor

    1997-01-01

    By a combination of construction and random search based on a careful analysis of the low weight words and the distance properties of the component codes, it is possible to find interleavers for turbo coding with a high minimum distance. We have designed a block interleaver with permutations...

  10. Flow Analysis of Code Customizations

    DEFF Research Database (Denmark)

    Hessellund, Anders; Sestoft, Peter

    2008-01-01

    Inconsistency between metadata and code customizations is a major concern in modern, configurable enterprise systems. The increasing reliance on metadata, in the form of XML files, and code customizations, in the form of Java files, has led to a hybrid development platform. The expected consisten...

  11. Recommendations for ECG diagnostic coding

    NARCIS (Netherlands)

    Bonner, R.E.; Caceres, C.A.; Cuddy, T.E.; Meijler, F.L.; Milliken, J.A.; Rautaharju, P.M.; Robles de Medina, E.O.; Willems, J.L.; Wolf, H.K.; Working Group 'Diagnostic Codes'

    1978-01-01

    The Oxford dictionary defines code as "a body of laws so related to each other as to avoid inconsistency and overlapping". It is obvious that natural language with its high degree of ambiguity does not qualify as a code in the sense of this definition. Everyday experiences provide ample evidence

  12. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that, on the whole, our sampling technique compares favourably with other methods, especially in the case of exploratory research.

  13. Similarities between Students Receiving Dress Code Violations and Discipline Referrals at Newport Junior High School

    Science.gov (United States)

    Nicholson, Nikki

    2007-01-01

    Background: Looking at dress code violations and demographics surrounding kids breaking the rules. Purpose: To see if there is a connection between dress code violations and discipline referrals. Setting: Jr. High School; Study Sample: Students with dress code violations for one week; Intervention: N/A; Research Design: Correlational; and Control…

  14. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally, the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.

  15. What Froze the Genetic Code?

    Science.gov (United States)

    Ribas de Pouplana, Lluís; Torres, Adrian Gabriel; Rafels-Ybern, Àlbert

    2017-04-05

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we offer a potential explanation for why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  16. What Froze the Genetic Code?

    Directory of Open Access Journals (Sweden)

    Lluís Ribas de Pouplana

    2017-04-01

    Full Text Available The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we offer a potential explanation for why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  17. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications including the simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for Space Weather Program) is discussed.

  18. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, shifting processing steps conventionally performed at the video encoder side to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pinpointing how the decoder steps are modified to provide...

  19. Non-Protein Coding RNAs

    CERN Document Server

    Walter, Nils G; Batey, Robert T

    2009-01-01

    This book assembles chapters from experts in the Biophysics of RNA to provide a broadly accessible snapshot of the current status of this rapidly expanding field. The 2006 Nobel Prize in Physiology or Medicine was awarded to the discoverers of RNA interference, highlighting just one example of a large number of non-protein coding RNAs. Because non-protein coding RNAs outnumber protein coding genes in mammals and other higher eukaryotes, it is now thought that the complexity of organisms is correlated with the fraction of their genome that encodes non-protein coding RNAs. Essential biological processes as diverse as cell differentiation, suppression of infecting viruses and parasitic transposons, higher-level organization of eukaryotic chromosomes, and gene expression itself are found to largely be directed by non-protein coding RNAs. The biophysical study of these RNAs employs X-ray crystallography, NMR, ensemble and single molecule fluorescence spectroscopy, optical tweezers, cryo-electron microscopy, and ot...

  20. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it also serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  1. Orthogonality of binary codes derived from Reed-Solomon codes

    Science.gov (United States)

    Retter, Charles T.

    1991-07-01

    A simple method is developed for determining the orthogonality of binary codes derived from Reed-Solomon codes and other cyclic codes of length (2 exp m) - 1 over GF(2 exp m) for m bits. Depending on the spectra of the codes, it is sufficient to test a small number of single-frequency pairs for orthogonality, and a pair of bases may be tested in each case simply by summing the appropriate powers of elements of the dual bases. This simple test can be used to find self-orthogonal codes. For even values of m, the author presents a technique that can be used to choose a basis that produces a self-orthogonal, doubly-even code in certain cases, particularly when m is highly composite. If m is a power of 2, this technique can be used to find self-dual bases for GF(2 exp m). Although the primary emphasis is on testing for self orthogonality, the fundamental theorems presented apply also to the orthogonality of two different codes.
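    The dual-basis test summarized above is specific to the paper; as a much simpler, hypothetical illustration of what self-orthogonality means operationally, a binary code is self-orthogonal exactly when every pair of generator rows (each row with itself included) has an even number of common ones, i.e. G Gᵀ ≡ 0 (mod 2). A minimal sketch in Python, with an example generator matrix chosen for illustration only:

```python
import numpy as np

def is_self_orthogonal(G: np.ndarray) -> bool:
    """A binary code is self-orthogonal iff every pair of generator rows
    (including each row with itself) has an even number of common 1s,
    i.e. G @ G.T == 0 over GF(2)."""
    return not np.any((G @ G.T) % 2)

# Illustrative example: a generator of the [8,4] extended Hamming code,
# which is self-dual and hence self-orthogonal.
G = np.array([
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
])
print(is_self_orthogonal(G))  # True
```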

  2. An improved canine genome and a comprehensive catalogue of coding genes and non-coding transcripts.

    Directory of Open Access Journals (Sweden)

    Marc P Hoeppner

    Full Text Available The domestic dog, Canis familiaris, is a well-established model system for mapping trait and disease loci. While the original draft sequence was of good quality, gaps were abundant particularly in promoter regions of the genome, negatively impacting the annotation and study of candidate genes. Here, we present an improved genome build, canFam3.1, which includes 85 MB of novel sequence and now covers 99.8% of the euchromatic portion of the genome. We also present multiple RNA-Sequencing data sets from 10 different canine tissues to catalog ∼175,000 expressed loci. While about 90% of the coding genes previously annotated by EnsEMBL have measurable expression in at least one sample, the number of transcript isoforms detected by our data expands the EnsEMBL annotations by a factor of four. Syntenic comparison with the human genome revealed an additional ∼3,000 loci that are characterized as protein coding in human and were also expressed in the dog, suggesting that those were previously not annotated in the EnsEMBL canine gene set. In addition to ∼20,700 high-confidence protein coding loci, we found ∼4,600 antisense transcripts overlapping exons of protein coding genes, ∼7,200 intergenic multi-exon transcripts without coding potential, likely candidates for long intergenic non-coding RNAs (lincRNAs), and ∼11,000 transcripts that were reported by two different library construction methods but did not fit any of the above categories. Of the lincRNAs, about 6,000 have no annotated orthologs in human or mouse. Functional analysis of two novel transcripts with shRNA in a mouse kidney cell line altered cell morphology and motility. All in all, we provide a much-improved annotation of the canine genome and suggest regulatory functions for several of the novel non-coding transcripts.

  3. An architecture for hybrid coding of NTSC TV signals

    Science.gov (United States)

    Jalali, A.; Rao, K. R.

    A hardware-feasible architecture for DCT/DPCM hybrid coding of color television (TV) signals has been developed. The coding system is based on formatting four horizontal scan lines into blocks of the same subcarrier phase elements. The samples in each block are rearranged and transformed by an FDCT processor. Based on its average energy, a given transform block is compared with its adjacent blocks and the nearest block is selected as its estimate. The difference between the actual and the estimated values of the DCT coefficients is then quantized and encoded using nonuniform quantizers and a variable-length coder. Furthermore, the maximum number of different wordlengths is assumed to be five. Therefore, five sets of 256-byte encoding ROMs are used to store the quantization tables. To reduce the redundancy in the code words, an adaptive coding scheme is used. The coding scheme is based on setting two threshold levels.
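    The record above only names the building blocks of the coder. As a rough sketch of the generic transform-and-quantize step (not the actual NTSC hardware design, which uses nonuniform quantizers and a variable-length coder), the following Python fragment applies a 2-D DCT to a pixel block and quantizes the coefficients with a single hypothetical uniform step:

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block: np.ndarray, q_step: float = 16.0) -> np.ndarray:
    """Forward 2-D DCT of one block followed by uniform quantization.
    (Illustrative only; the paper uses nonuniform quantizers and a
    variable-length coder.)"""
    coeffs = dctn(block.astype(float), norm="ortho")
    return np.round(coeffs / q_step).astype(int)

def decode_block(q_coeffs: np.ndarray, q_step: float = 16.0) -> np.ndarray:
    """Dequantize and invert the DCT."""
    return idctn(q_coeffs * q_step, norm="ortho")

block = np.random.randint(0, 256, (8, 8))   # hypothetical 8x8 pixel block
recon = decode_block(encode_block(block))
print(np.abs(recon - block).max())           # reconstruction error due to quantization
```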

  4. Utilization of recently developed codes for high power Brayton and Rankine cycle power systems

    Science.gov (United States)

    Doherty, Michael P.

    1993-01-01

    Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.

  5. Venous Sampling

    Science.gov (United States)

    ... neck to help locate abnormally functioning glands or pituitary adenoma. This test is most often used after an unsuccessful neck exploration. Inferior petrosal sinus sampling, in which blood samples are taken from veins that drain the pituitary gland to study disorders related to pituitary hormone ...

  6. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  7. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our...

  8. Environmental sampling

    Energy Technology Data Exchange (ETDEWEB)

    Puckett, J.M.

    1998-12-31

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation.

  9. GOSSIP: SED fitting code

    Science.gov (United States)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, where available, a spectrum; then it performs a chi-square minimization fitting procedure versus a set of synthetic models. The fitting results are used to estimate a number of physical parameters like the Star Formation History, absolute magnitudes, stellar mass and their Probability Distribution Functions.
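    As a hedged sketch of the chi-square template matching described above (not GOSSIP's actual implementation), an observed SED can be compared against a small grid of synthetic models and the model with the smallest chi-square kept; the band fluxes, errors, and model names below are entirely hypothetical:

```python
import numpy as np

def chi_square(obs, err, model):
    """Chi-square of one synthetic model against the observed fluxes."""
    return np.sum(((obs - model) / err) ** 2)

# Hypothetical observed fluxes (arbitrary units) and errors in a few bands.
obs = np.array([1.2, 2.3, 3.1, 2.8])
err = np.array([0.1, 0.2, 0.2, 0.3])

# Hypothetical library of synthetic SEDs evaluated in the same bands.
models = {
    "young_burst": np.array([1.0, 2.0, 3.0, 3.0]),
    "old_passive": np.array([0.5, 1.5, 3.5, 4.0]),
}

best = min(models, key=lambda name: chi_square(obs, err, models[name]))
print(best, chi_square(obs, err, models[best]))
```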

  10. Users manual for coordinate generation code CRDSRA

    Science.gov (United States)

    Shamroth, S. J.

    1985-01-01

    Generation of a viable coordinate system represents an important component of an isolated airfoil Navier-Stokes calculation. The manual describes a computer code for generation of such a coordinate system. The coordinate system is a general nonorthogonal one in which high resolution normal to the airfoil is obtained in the vicinity of the airfoil surface, and high resolution along the airfoil surface is obtained in the vicinity of the airfoil leading edge. The method of generation is a constructive technique which leads to a C type coordinate grid. The method of construction as well as input and output definitions are contained herein. The computer code itself as well as a sample output is being submitted to COSMIC.

  11. Block error correction codes for face recognition

    Science.gov (United States)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2011-06-01

    Face recognition is one of the most desirable biometric-based authentication schemes to control access to sensitive information/locations and as a proof of identity to claim entitlement to services. The aim of this paper is to develop block-based mechanisms to reduce recognition errors that result from varying illumination conditions, with emphasis on using error correction codes. We investigate the modelling of error patterns in different parts/blocks of face images as a result of differences in illumination conditions, and we use appropriate error correction codes to deal with the corresponding distortion. We test the performance of our proposed schemes using the Extended Yale-B Face Database, which consists of face images belonging to 5 illumination subsets depending on the direction of light source from the camera. In our experiments each image is divided into three horizontal regions as follows: region 1: three rows above the eyebrows, the eyebrows, and the eyes; region 2: the nose region; and region 3: the mouth and chin region. By estimating statistical parameters for errors in each region we select suitable BCH error correction codes that yield improved recognition accuracy for that particular region in comparison to applying error correction codes to the entire image. Discrete Wavelet Transform (DWT) to a depth of 3 is used for face feature extraction, followed by global/local binarization of coefficients in each subband. We shall demonstrate that the use of BCH improves separation of the distribution of Hamming distances of client-client samples from the distribution of Hamming distances of imposter-client samples.
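    A minimal sketch of the Hamming-distance verification step mentioned at the end of the abstract, leaving out the DWT features and BCH correction entirely; the feature length, error pattern, and decision threshold are hypothetical:

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized Hamming distance between two binary feature vectors."""
    return np.count_nonzero(a != b) / a.size

# Hypothetical binarized features for an enrolled client and a probe image.
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 256)
probe = enrolled.copy()
probe[rng.choice(256, 20, replace=False)] ^= 1   # simulate illumination-induced bit errors

THRESHOLD = 0.25          # hypothetical decision threshold
d = hamming_distance(enrolled, probe)
print(d, "accept" if d < THRESHOLD else "reject")
```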

  12. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  13. The ZPIC educational code suite

    Science.gov (United States)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems that will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.

  14. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  15. The Flutter Shutter Code Calculator

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2015-08-01

    Full Text Available The goal of the flutter shutter is to make uniform motion blur invertible, by a "fluttering" shutter that opens and closes on a sequence of well chosen sub-intervals of the exposure time interval. In other words, the photon flux is modulated according to a well chosen sequence called the flutter shutter code. This article provides a numerical method that computes optimal flutter shutter codes in terms of mean square error (MSE). We assume that the observed objects follow a known (or learned) random velocity distribution. In this paper, Gaussian and uniform velocity distributions are considered. Snapshots are also optimized taking the velocity distribution into account. For each velocity distribution, the gain of the optimal flutter shutter code with respect to the optimal snapshot in terms of MSE is computed. This symmetric optimization of the flutter shutter and of the snapshot allows one to compare both solutions, i.e. camera designs, on an equal footing. Optimal flutter shutter codes are demonstrated to improve substantially the MSE compared to classic (patented or not) codes. A numerical method that permits to perform a reverse engineering of any existing (patented or not) flutter shutter codes is also described and an implementation is given. In this case we give the underlying velocity distribution from which a given optimal flutter shutter code comes.

  16. Surface code implementation of block code state distillation

    Science.gov (United States)

    Fowler, Austin G.; Devitt, Simon J.; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868
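    A small worked comparison of the input-copy counts quoted above, ignoring all other surface-code overheads (which the paper shows can dominate): the 15-to-1 protocol consumes 15 copies per improved state, while the block protocol consumes (3k + 8)/k copies per improved state.

```python
# Input copies consumed per improved |A> state, from the counts quoted above.
for k in (1, 2, 4, 8, 16):
    per_state_block = (3 * k + 8) / k       # block code protocol
    print(f"k={k:2d}: block={per_state_block:.2f} copies/state  vs  15-to-1=15")
# As k grows, the block protocol approaches 3 copies per state, but the paper
# finds the overall surface-code overhead reduction is typically under 3x.
```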

  17. Elevating sampling

    Science.gov (United States)

    Labuz, Joseph M.; Takayama, Shuichi

    2014-01-01

    Sampling – the process of collecting, preparing, and introducing an appropriate volume element (voxel) into a system – is often underappreciated and pushed behind the scenes in lab-on-a-chip research. What often stands in the way between proof-of-principle demonstrations of potentially exciting technology and its broader dissemination and actual use, however, is the effectiveness of sample collection and preparation. The power of micro- and nanofluidics to improve reactions, sensing, separation, and cell culture cannot be accessed if sampling is not equally efficient and reliable. This perspective will highlight recent successes as well as assess current challenges and opportunities in this area. PMID:24781100

  18. Evaluation of ICD-9-CM codes for craniofacial microsomia.

    Science.gov (United States)

    Luquetti, Daniela V; Saltzman, Babette S; Vivaldi, Daniela; Pimenta, Luiz A; Hing, Anne V; Cassell, Cynthia H; Starr, Jacqueline R; Heike, Carrie L

    2012-12-01

    Craniofacial microsomia (CFM) is a congenital condition characterized by microtia and mandibular underdevelopment. Healthcare databases and birth defects surveillance programs could be used to improve knowledge of CFM. However, no specific International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code exists for this condition, which makes standardized data collection challenging. Our aim was to evaluate the validity of existing ICD-9-CM codes to identify individuals with CFM. Study sample eligibility criteria were developed by an expert panel and matched to 11 ICD-9-CM codes. We queried hospital discharge data from two craniofacial centers and identified a total of 12,254 individuals who had ≥1 potentially CFM-related code(s). We reviewed all (n = 799) medical records identified at the University of North Carolina (UNC) and 500 randomly selected records at Seattle Children's Hospital (SCH). Individuals were classified as a CFM case or non-case. Thirty-two individuals (6%) at SCH and 93 (12%) at UNC met the CFM eligibility criteria. At both centers, 59% of cases and 95% of non-cases had only one code assigned. At both centers, the most frequent codes were 744.23 (microtia), 754.0 and 756.0 (nonspecific codes), and the code 744.23 had a positive predictive value (PPV) >80% and sensitivity >70%. The code 754.0 had a sensitivity of 3% (PPV <1%) at SCH and 36% (PPV = 5%) at UNC, whereas 756.0 had a sensitivity of 38% (PPV = 5%) at SCH and 18% (PPV = 26%) at UNC. These findings suggest the need for a specific CFM code to facilitate CFM surveillance and research. Copyright © 2012 Wiley Periodicals, Inc.
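    For readers unfamiliar with the metrics quoted above, positive predictive value and sensitivity come from cross-tabulating a code assignment against the chart-review gold standard; the counts in this sketch are hypothetical and chosen only to land near the reported PPV >80% and sensitivity >70%:

```python
def ppv_and_sensitivity(tp: int, fp: int, fn: int):
    """Positive predictive value and sensitivity of a code for finding cases.
    tp: coded positive and truly CFM; fp: coded positive but not CFM;
    fn: CFM cases the code missed."""
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return ppv, sensitivity

# Hypothetical counts for one ICD-9-CM code against chart review.
print(ppv_and_sensitivity(tp=70, fp=15, fn=25))   # -> (~0.82, ~0.74)
```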

  19. Code-Mixing and Code Switchingin The Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the specific forms of code switching and code mixing found in classroom teaching and learning activities, as well as to determine the factors that give rise to those forms of code switching and code mixing. The research is a descriptive qualitative case study conducted at the Al Mawaddah Boarding School, Ponorogo. The analysis shows that code mixing and code switching in the learning activities at Al Mawaddah Boarding School occur among Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include identification of the role, the desire to explain and interpret, and sourcing from the original language and its variations or from a foreign language. The deciding factors for code switching include the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see how language is used in a multilingual society, especially the rules and characteristics of language variation in classroom teaching and learning activities at the Al Mawaddah boarding school. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students for developing oral communication skills and effective teaching and learning strategies in boarding schools.

  20. Application of containment codes to LMFBRs in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.W.

    1977-01-01

    The application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs is described. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed.

  1. Color-Coding Politics

    Directory of Open Access Journals (Sweden)

    Benjamin Gross

    2013-02-01

    Full Text Available During the 2000 Presidential election between George W. Bush and Al Gore, journalists often used the terms blue states and red states to describe the political landscape within the United States. This article studies the framing of these terms during the years 2004 through 2007. Using latent and manifest qualitative content analyses, six different news media frames were found in a sample of 337 newspaper articles. Two hypotheses were also tested, indicating that framing patterns varied slightly by time period and article types. However, the argument that increased levels of political polarization in the United States have been created by predominantly conflict-oriented coverage may not be true. Instead, these terms became journalistic heuristics that were used to organize how people think about politics in a way that fit with contemporary media practices, and there is no single agreed upon interpretation of these terms within this reporting.

  2. FIFPC, a fast ion Fokker--Planck code

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, R.H.; Callen, J.D.; Rome, J.A.; Smith, J.

    1976-07-01

    A computer code is described which solves the Fokker--Planck equation for the velocity space distribution of fast ions injected into a tokamak plasma. The numerical techniques are described and use of the code is outlined. The program is written in FORTRAN IV and is modularized in order to provide greater flexibility to the user. A program listing is provided and the results of sample cases are presented.

  3. Lossless Coding with Generalised Criteria

    CERN Document Server

    Charalambous, Charalambos D; Rezaei, Farzad

    2011-01-01

    This paper presents prefix codes which minimize various criteria constructed as a convex combination of maximum codeword length and average codeword length, or maximum redundancy and average redundancy, including a convex combination of the average of an exponential function of the codeword length and the average redundancy. This framework encompasses as a special case several criteria previously investigated in the literature, while relations to universal coding are discussed. The coding algorithm derived is parametric, re-adjusting the initial source probabilities via a weighted probability vector according to a merging rule. The level of desirable merging has implications for applications where the maximum codeword length is bounded.

  4. LiveCode mobile development

    CERN Document Server

    Lavieri, Edward D

    2013-01-01

    A practical guide written in a tutorial style, "LiveCode Mobile Development Hotshot" walks you step-by-step through 10 individual projects. Every project is divided into subtasks to make learning more organized and easy to follow along with explanations, diagrams, screenshots, and downloadable material. This book is great for anyone who wants to develop mobile applications using LiveCode. You should be familiar with LiveCode and have access to a smartphone. You are not expected to know how to create graphics or audio clips.

  5. Network Coding Fundamentals and Applications

    CERN Document Server

    Medard, Muriel

    2011-01-01

    Network coding is a field of information and coding theory and is a method of attaining maximum information flow in a network. This book is an ideal introduction for the communications and network engineer, working in research and development, who needs an intuitive introduction to network coding and to the increased performance and reliability it offers in many applications. This book is an ideal introduction for the research and development communications and network engineer who needs an intuitive introduction to the theory and wishes to understand the increased performance and reliabil

  6. Writing the Live Coding Book

    DEFF Research Database (Denmark)

    Blackwell, Alan; Cox, Geoff; Lee, Sang Wong

    2016-01-01

    This paper is a speculation on the relationship between coding and writing, and the ways in which technical innovations and capabilities enable us to rethink each in terms of the other. As a case study, we draw on recent experiences of preparing a book on live coding, which integrates a wide range of personal, historical, technical and critical perspectives. This book project has been both experimental and reflective, in a manner that allows us to draw on critical understanding of both code and writing, and point to the potential for new practices in the future.

  7. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  8. i-Review: Sharing Code

    Directory of Open Access Journals (Sweden)

    Jonas Kubilius

    2014-02-01

    Full Text Available Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF. GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  9. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  10. Electronic Code of Federal Regulations

    Data.gov (United States)

    National Archives and Records Administration — The Electronic Code of Federal Regulations (e-CFR) is the codification of the general and permanent rules published in the Federal Register by the executive...

  11. The Serializability of Network Codes

    CERN Document Server

    Blasiak, Anna

    2010-01-01

    Network coding theory studies the transmission of information in networks whose vertices may perform nontrivial encoding and decoding operations on data as it passes through the network. The main approach to deciding the feasibility of network coding problems aims to reduce the problem to optimization over a polytope of entropic vectors subject to constraints imposed by the network structure. In the case of directed acyclic graphs, these constraints are completely understood, but for general graphs the problem of enumerating them remains open: it is not known how to classify the constraints implied by a property that we call serializability, which refers to the absence of paradoxical circular dependencies in a network code. In this work we initiate the first systematic study of the constraints imposed on a network code by serializability. We find that serializability cannot be detected solely by evaluating the Shannon entropy of edge sets in the graph, but nevertheless, we give a polynomial-time algorithm tha...

  12. Tree Coding of Bilevel Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    probabilities to an arithmetic coder. The conditional probabilities are estimated from co-occurrence statistics of past pixels; the statistics are stored in a tree. By organizing the code length calculations properly, a vast number of possible models (trees) reflecting different pixel orderings can be investigated within reasonable time prior to generating the code. A number of general-purpose coders are constructed according to this principle. Rissanen's (1989) one-pass algorithm, context, is presented in two modified versions. The baseline is proven to be a universal coder. The faster version, which is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult...
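    As a hedged sketch of the general idea of context modelling with an adaptive coder (not the tree-structured models or the free-tree search of the paper), the following fragment estimates conditional probabilities from counts over a short causal context and sums the ideal code length -log2(p) that an arithmetic coder would approach:

```python
import math
from collections import defaultdict

# counts[context][pixel] = number of times `pixel` followed `context`
counts = defaultdict(lambda: [1, 1])   # Laplace-style initial counts

def code_length(pixels, context_size=4):
    """Ideal code length (bits) of a bilevel pixel stream under an adaptive
    context model; an arithmetic coder would approach this length."""
    total = 0.0
    for i, px in enumerate(pixels):
        ctx = tuple(pixels[max(0, i - context_size):i])   # causal context
        c0, c1 = counts[ctx]
        p = (c1 if px else c0) / (c0 + c1)
        total += -math.log2(p)
        counts[ctx][px] += 1                               # adaptive update
    return total

print(code_length([0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]))
```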

  13. FLYCHK Collisional-Radiative Code

    Science.gov (United States)

    SRD 160 FLYCHK Collisional-Radiative Code (Web, free access). FLYCHK provides a capability to generate atomic level populations and charge state distributions for low-Z to mid-Z elements under NLTE conditions.

  14. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  15. The Aesthetics of Code Switching

    National Research Council Canada - National Science Library

    Ali Mohammadi Asiabadi

    2010-01-01

    ... explained. However, the aesthetic aspects of these figures can be shown through several theories that discuss code switching considering the fact that some of the theories are commonly used in literature and literary criticism...

  16. Interlibrary Loan Codes and Guidelines.

    Science.gov (United States)

    RQ, 1980

    1980-01-01

    Presents a model interlibrary loan policy for regional, state, local, and other special groups of libraries; the 1980 national interlibrary loan code; and the 1978 procedural guidelines for international lending. (FM)

  17. Zip Codes - MDC_WCSZipcode

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — The WCSZipcode polygon feature class was created by Miami-Dade Enterprise Technology Department to be used in the WCS batch jobs to assign the actual zip code of...

  18. The Aesthetics of Code Switching

    National Research Council Canada - National Science Library

    Ali Mohammadi Asiabadi

    2010-01-01

    .... However, the aesthetic aspects of these figures can be shown through several theories that discuss code switching considering the fact that some of the theories are commonly used in literature and literary criticism...

  19. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  20. Allegheny County Zip Code Boundaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset demarcates the zip code boundaries that lie within Allegheny County. These are not clipped to the Allgeheny County boundary. If viewing this...

  1. Universal codes of the natural numbers

    OpenAIRE

    Filmus, Yuval

    2013-01-01

    A code of the natural numbers is a uniquely-decodable binary code of the natural numbers with non-decreasing codeword lengths, which satisfies Kraft's inequality tightly. We define a natural partial order on the set of codes, and show how to construct effectively a code better than a given sequence of codes, in a certain precise sense. As an application, we prove that the existence of a scale of codes (a well-ordered set of codes which contains a code better than any given code) is independen...
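    A familiar concrete instance of such a code (illustrative only, and not necessarily one analyzed in the paper) is the Elias gamma code: its codeword lengths are non-decreasing and its Kraft sum over all naturals equals 1. A short sketch encoding a number and checking a finite prefix of the Kraft sum:

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code of n >= 1: (len(bin(n)) - 1) zeros followed by bin(n)."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

# Codeword lengths are non-decreasing, and the Kraft sum over all n is exactly 1,
# so any finite prefix of the sum stays just below 1.
kraft = sum(2 ** -len(elias_gamma(n)) for n in range(1, 10000))
print(elias_gamma(9), kraft)   # '0001001', close to (but below) 1
```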

  2. Beam-dynamics codes used at DARHT

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  3. UNIX code management and distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hung, T.; Kunz, P.F.

    1992-09-01

    We describe a code management and distribution system based on tools freely available for the UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site such that remote developers are true peers in the code development process.

  4. Verification of ONED90 code

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Ki Bog; Zee, Sung Kyun; Lee, Chang Ho [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

    ONED90, developed by KAERI, is a 1-dimensional 2-group diffusion theory code. For nuclear design and reactor simulation, the usage of ONED90 encompasses core follow calculation, load follow calculation, plant power control simulation, xenon oscillation simulation and control rod maneuvering, etc. In order to verify the validity of the ONED90 code, two well-known benchmark problems were solved; ONED90 shows results very similar to the reference solutions. (Author) 11 refs., 5 figs., 13 tabs.

  5. Training course on code implementation.

    Science.gov (United States)

    Allain, A; De Arango, R

    1992-01-01

    The International Baby Food Action Network (IBFAN) is a coalition of over 40 citizen groups in 70 countries. IBFAN monitors the progress worldwide of the implementation of the International Code of Marketing of Breastmilk Substitutes. The Code is intended to regulate the advertising and promotional techniques used to sell infant formula. The 1991 IBFAN report shows that 75 countries have taken some action to implement the International Code. During 1992, the IBFAN Code Documentation Center in Malaysia conducted 2 training courses to help countries draft legislation to implement and monitor compliance with the International Code. In April, government officials from 19 Asian and African countries attended the first course in Malaysia; the second course was conducted in Spanish in Guatemala and attended by officials from 15 Latin American and Caribbean countries. The resource people included representatives from NGOs in Africa, Asia, Latin America, Europe and North America with experience in Code implementation and monitoring at the national level. The main purpose of each course was to train government officials to use the International Code as a starting point for national legislation to protect breastfeeding. Participants reviewed recent information on lactation management, the advantages of breastfeeding, current trends in breastfeeding and the marketing practices of infant formula manufacturers. The participants studied the terminology contained in the International Code and terminology used by infant formula manufacturers to include breastmilk supplements such as follow-on formulas and cereal-based baby foods. Relevant World Health Assembly resolutions such as the one adopted in 1986 on the need to ban free and low-cost supplies to hospitals were examined. The legal aspects of the current Baby Friendly Hospital Initiative (BFHI) and the progress in the 12 BFHI test countries concerning the elimination of supplies were also examined. International Labor

  6. Continuous speech recognition with sparse coding

    CSIR Research Space (South Africa)

    Smit, WJ

    2009-04-01

    Full Text Available Sparse coding is an efficient way of coding information. In a sparse code most of the code elements are zero; very few are active. Sparse codes are intended to correspond to the spike trains with which biological neurons communicate. In this article...
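    As a hedged, generic illustration of computing a sparse code for a signal over a fixed dictionary (not the method used in the article), the following fragment runs a few iterations of iterative soft-thresholding (ISTA) for the lasso objective; the dictionary, signal, and parameters are hypothetical:

```python
import numpy as np

def sparse_code(D: np.ndarray, x: np.ndarray, lam=0.1, iters=200):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1, yielding a sparse code a."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))          # hypothetical overcomplete dictionary
x = D[:, 3] * 2.0 - D[:, 17]               # signal built from two atoms
a = sparse_code(D, x)
print(np.count_nonzero(np.abs(a) > 1e-3))  # only a few active code elements
```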

  7. Facilitating Internet-Scale Code Retrieval

    Science.gov (United States)

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  8. Error-erasure decoding of product codes.

    Science.gov (United States)

    Wainberg, S.

    1972-01-01

    Two error-erasure decoding algorithms for product codes that correct all the error-erasure patterns guaranteed correctable by the minimum Hamming distance of the product code are given. The first algorithm works when at least one of the component codes is majority-logic decodable. The second algorithm works for any product code. Both algorithms use the decoders of the component codes.

  9. Sensorimotor transformation via sparse coding

    Science.gov (United States)

    Takiyama, Ken

    2015-01-01

    Sensorimotor transformation is indispensable to the accurate motion of the human body in daily life. For instance, when we grasp an object, the distance from our hands to an object needs to be calculated by integrating multisensory inputs, and our motor system needs to appropriately activate the arm and hand muscles to minimize the distance. The sensorimotor transformation is implemented in our neural systems, and recent advances in measurement techniques have revealed an important property of neural systems: a small percentage of neurons exhibits extensive activity while a large percentage shows little activity, i.e., sparse coding. However, we do not yet know the functional role of sparse coding in sensorimotor transformation. In this paper, I show that sparse coding enables complete and robust learning in sensorimotor transformation. In general, if a neural network is trained to maximize the performance on training data, the network shows poor performance on test data. Nevertheless, sparse coding renders compatible the performance of the network on both training and test data. Furthermore, sparse coding can reproduce reported neural activities. Thus, I conclude that sparse coding is necessary and a biologically plausible factor in sensorimotor transformation. PMID:25923980

  10. Continuous Non-malleable Codes

    DEFF Research Database (Denmark)

    Faust, Sebastian; Mukherjee, Pratyay; Nielsen, Jesper Buus

    2014-01-01

    Non-malleable codes are a natural relaxation of error correcting/detecting codes that have useful applications in the context of tamper-resilient cryptography. Informally, a code is non-malleable if an adversary trying to tamper with an encoding of a given message can only leave it unchanged ... non-malleable codes where the adversary is only allowed to tamper a single time with an encoding. We show how to construct continuous non-malleable codes in the common split-state model, where an encoding consists of two parts and the tampering can be arbitrary but has to be independent for the two parts. Our main contributions are outlined below: We propose a new uniqueness requirement of split-state codes, which states that it is computationally hard to find two codewords X = (X0, X1) and X′ = (X0, X1′) such that both codewords are valid, but X0 is the same in both X and X′. A simple attack shows that uniqueness...

  11. The cosmic code comparison project

    Energy Technology Data Exchange (ETDEWEB)

    Heitmann, Katrin; Fasel, Patricia; Habib, Salman; Warren, Michael S; Ahrens, James; Ankeny, Lee; O' Shea, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Lukic, Zarija; Ricker, Paul M [Department of Astronomy, University of Illinois, Urbana, IL 61801 (United States); White, Martin [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Armstrong, Ryan [Department of Computer Science, UC Davis, Davis, CA 95616 (United States); Springel, Volker [Max-Planck-Institute for Astrophysics, 85741 Garching (Germany); Stadel, Joachim [Institute of Theoretical Physics, University of Zurich, 8057 Zurich (Switzerland); Trac, Hy [Department of Astrophysical Sciences, Princeton University, NJ 08544 (United States)], E-mail: heitmann@lanl.gov

    2008-10-01

    Current and upcoming cosmological observations allow us to probe structures on smaller and smaller scales, entering highly nonlinear regimes. In order to obtain theoretical predictions in these regimes, large cosmological simulations have to be carried out. The promised high accuracy from observations makes the simulation task very demanding: the simulations have to be at least as accurate as the observations. This requirement can only be fulfilled by carrying out an extensive code verification program. The first step of such a program is the comparison of different cosmology codes including gravitational interactions only. In this paper, we extend a recently carried out code comparison project to include five more simulation codes. We restrict our analysis to a small cosmological volume which allows us to investigate properties of halos. For the matter power spectrum and the mass function, the previous results hold, with the codes agreeing at the 10% level over wide dynamic ranges. We extend our analysis to the comparison of halo profiles and investigate the halo count as a function of local density. We introduce and discuss ParaView as a flexible analysis tool for cosmological simulations, the use of which immensely simplifies the code comparison task.

  12. The EISCAT meteor code

    Directory of Open Access Journals (Sweden)

    G. Wannberg

    2008-08-01

    Full Text Available The EISCAT UHF system has the unique capability to determine meteor vector velocities from the head echo Doppler shifts measured at the three sites. Since even meteors spending a very short time in the common volume produce analysable events, the technique lends itself ideally to mapping the orbits of meteors arriving from arbitrary directions over most of the upper hemisphere.

    A radar mode optimised for this application was developed in 2001/2002. A specially selected low-sidelobe 32-bit pseudo-random binary sequence is used to binary phase shift key (BPSK) the transmitted carrier. The baud-length is 2.4 μs and the receiver bandwidth is 1.6 MHz to accommodate both the resulting modulation bandwidth and the target Doppler shift. Sampling is at 0.6 μs, corresponding to 90-m range resolution. Target range and Doppler velocity are extracted from the raw data in a multi-step matched-filter procedure. For strong (SNR>5) events the Doppler velocity standard deviation is 100–150 m/s. The effective range resolution is about 30 m, allowing very accurate time-of-flight velocity estimates. On average, Doppler and time-of-flight (TOF) velocities agree to within about one part in 10³. Two or more targets simultaneously present in the beam can be resolved down to a range separation <300 m as long as their Doppler shifts differ by more than a few km/s.
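    The multi-step range/Doppler extraction itself is not spelled out in the record. As a much simplified, hypothetical sketch of just the first matched-filter step, one can cross-correlate the received samples with the transmitted binary code to locate the code delay (range gate); the code, delay, and noise level below are made up and unrelated to the actual EISCAT sequence or processing chain:

```python
import numpy as np

rng = np.random.default_rng(7)
code = rng.choice([-1.0, 1.0], 32)            # stand-in for the 32-bit BPSK code
true_delay = 190                              # hypothetical target range gate
rx = np.zeros(512)
rx[true_delay:true_delay + 32] = 0.8 * code   # echo of the code
rx += 0.5 * rng.standard_normal(512)          # buried in receiver noise

# Matched filter: correlate received samples against the transmitted code.
mf = np.correlate(rx, code, mode="valid")
print(int(np.argmax(np.abs(mf))))             # ~190, the estimated code delay
```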

  13. Revised numerical wrapper for PIES code

    Science.gov (United States)

    Raburn, Daniel; Reiman, Allan; Monticello, Donald

    2015-11-01

    A revised external numerical wrapper has been developed for the Princeton Iterative Equilibrium Solver (PIES code), which is capable of calculating 3D MHD equilibria with islands. The numerical wrapper has been demonstrated to greatly improve the rate of convergence in numerous cases corresponding to equilibria in the TFTR device where magnetic islands are present. The numerical wrapper makes use of a Jacobian-free Newton-Krylov solver along with adaptive preconditioning and a sophisticated subspace-restricted Levenberg-Marquardt backtracking algorithm. The details of the numerical wrapper and several sample results are presented.

  14. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2); five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN) and diffusion calculation modules (TUD, CITATION); and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  15. Analyzing a School Dress Code in a Junior High School: A Set of Exercises.

    Science.gov (United States)

    East, Maurice A.; And Others

    Five exercises based on a sample school dress code were designed from a political science perspective to help students develop skills in analyzing issues. The exercises are intended to be used in five or more class periods. In the first exercise, students read a sample dress code and name groups of people who might have opinions about it. In…

  16. Codes That Support Smart Growth Development

    Science.gov (United States)

    Provides examples of local zoning codes that support smart growth development, categorized by: unified development code, form-based code, transit-oriented development, design guidelines, street design standards, and zoning overlay.

  17. Convolutional Goppa codes defined on fibrations

    CERN Document Server

    Curto, J I Iglesias; Martín, F J Plaza; Sotelo, G Serrano

    2010-01-01

    We define a new class of Convolutional Codes in terms of fibrations of algebraic varieties, generalizing our previous constructions of Convolutional Goppa Codes. Using this general construction we can give several examples of Maximum Distance Separable (MDS) Convolutional Codes.

  18. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2010-04-16

    ... National Institute of Standards and Technology International Code Council: The Update Process for the International Codes and Standards AGENCY: National Institute of Standards and Technology, Commerce. ACTION: Notice. SUMMARY: The International Code Council (ICC), promulgator of the International Codes and...

  19. Evaluation of Code Blue Implementation Outcomes

    OpenAIRE

    Bengü Özütürk; Nalan Muhammedoğlu; Emel Dal; Berna Çalışkan

    2015-01-01

    Aim: In this study, we aimed to emphasize the importance of Code Blue implementation and to determine deficiencies in this regard. Methods: After obtaining the ethics committee approval, Code Blue call data for 225 patients between 2012 and January 2014 were retrospectively analyzed. Age and gender of the patients, date and time of the call and the clinics giving the Code Blue call, the time needed for the Code Blue team to arrive, the rates of false Code Blue calls, reasons for Code...

  20. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of efficient search for good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular......, catastrophic encoders and minimal encoders are characterized and dual codes treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...

  1. On Code Parameters and Coding Vector Representation for Practical RLNC

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    RLNC provides a theoretically efficient method for coding. The drawbacks associated with it are the complexity of the decoding and the overhead resulting from the encoding vector. Increasing the field size and generation size presents a fundamental trade-off between packet-based throughput...... to higher energy consumption. Therefore, the optimal trade-off is system and topology dependent, as it depends on the cost in energy of performing coding operations versus transmitting data. We show that moderate field sizes are the correct choice when trade-offs are considered. The results show that sparse...
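
    As a minimal illustration of the quantities involved (generation size, the per-packet coding-vector overhead, and decoding by Gaussian elimination), the following sketch encodes and decodes one generation over GF(2). It is only a toy; the trade-offs discussed in the paper concern larger field sizes and real energy costs, which are not modelled here.

      # RLNC sketch over GF(2): encode a generation with random coding vectors and
      # decode by Gaussian elimination (illustrative only, field size q = 2).
      import numpy as np

      rng = np.random.default_rng(1)
      gen_size, pkt_len = 8, 16                        # generation size, symbols per packet
      packets = rng.integers(0, 2, (gen_size, pkt_len), dtype=np.uint8)

      # Each coded packet carries its coding vector (the per-packet overhead).
      n_coded = 12                                     # send a few extra coded packets
      vectors = rng.integers(0, 2, (n_coded, gen_size), dtype=np.uint8)
      coded = (vectors @ packets) % 2                  # GF(2) linear combinations

      # Decode: reduce [vectors | coded] to reduced row-echelon form over GF(2).
      aug = np.hstack([vectors, coded])
      row = 0
      for col in range(gen_size):
          pivot = next((r for r in range(row, n_coded) if aug[r, col]), None)
          if pivot is None:
              continue
          aug[[row, pivot]] = aug[[pivot, row]]
          for r in range(n_coded):
              if r != row and aug[r, col]:
                  aug[r] ^= aug[row]
          row += 1

      decodable = row == gen_size
      print("full rank:", decodable)
      if decodable:
          print("recovered OK:", np.array_equal(aug[:gen_size, gen_size:], packets))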

  2. A Connection between Network Coding and Convolutional Codes

    OpenAIRE

    Fragouli, C.; Soljanin, E.

    2004-01-01

    The min-cut, max-flow theorem states that a source node can send a commodity through a network to a sink node at the rate determined by the flow of the min-cut separating the source and the sink. Recently it has been shown that by linear re-encoding at nodes in communications networks, the min-cut rate can also be achieved in multicasting to several sinks. In this paper we discuss connections between such coding schemes and convolutional codes. We propose a method to simplify the convolutional...

  3. Combinatorial polarization, code loops, and codes of high level

    Directory of Open Access Journals (Sweden)

    Petr Vojtěchovský

    2004-07-01

    Full Text Available We first find the combinatorial degree of any map f:V→F, where F is a finite field and V is a finite-dimensional vector space over F. We then simplify and generalize a certain construction, due to Chein and Goodaire, that was used in characterizing code loops as finite Moufang loops that possess at most two squares. The construction yields binary codes of high divisibility level with prescribed Hamming weights of intersections of codewords.

  4. JEMs and incompatible occupational coding systems: effect of manual and automatic recoding of job codes on exposure assignment.

    Science.gov (United States)

    Koeman, Tom; Offermans, Nadine S M; Christopher-de Vries, Yvette; Slottje, Pauline; Van Den Brandt, Piet A; Goldbohm, R Alexandra; Kromhout, Hans; Vermeulen, Roel

    2013-01-01

    In epidemiological studies, occupational exposure estimates are often assigned through linkage of job histories to job-exposure matrices (JEMs). However, available JEMs may have a coding system incompatible with the coding system used to code the job histories, necessitating a translation of the originally assigned job codes. Since manual recoding is usually not feasible in large studies, this is often done by use of automated crosswalks translating job codes from one system to another. We set out to investigate whether automatically translating job codes led to different exposure estimates compared with those resulting from manual recoding using the original job descriptions. One hundred job histories were randomly drawn from the Netherlands Cohort Study on diet and cancer (NLCS), using a sampling strategy designed to oversample potentially exposed jobs. This resulted in 220 job codes that were automatically translated from the original Dutch coding system to the International Standard Classification of Occupations (ISCO)-68 and ISCO-88 as well as manually recoded from the job descriptions in the original questionnaire by two coders. Exposure to several agents (i.e. chromium, asbestos, silica, pesticides, aromatic solvents, and extremely low-frequency magnetic fields) was assigned by JEMs based on job codes resulting from automatic and manual recodings. The agreement between occupational exposure estimates based on the crosswalk versus those based on manual recoding reached a Cohen's Kappa (κ) of 0.66 or higher and were similar to the agreements between the two coders. Results of this study indicate that using automated crosswalks to recode job codes from one occupational classification system to another results only in a limited loss in agreement in assigned occupational exposure estimates compared with direct manual recoding. Therefore, in this case, crosswalks provide an efficient alternative to the costly and time-consuming direct manual recoding from job
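
    For readers unfamiliar with the agreement statistic, the sketch below shows how a Cohen's kappa of this kind can be computed with scikit-learn; the ten exposure labels are invented for illustration and have no relation to the NLCS data.

      # Sketch: agreement between exposure assignments from an automatic crosswalk and
      # from manual recoding, measured with Cohen's kappa (labels are made up).
      from sklearn.metrics import cohen_kappa_score

      crosswalk = ["exposed", "unexposed", "exposed", "unexposed", "exposed",
                   "unexposed", "unexposed", "exposed", "exposed", "unexposed"]
      manual    = ["exposed", "unexposed", "exposed", "exposed",   "exposed",
                   "unexposed", "unexposed", "unexposed", "exposed", "unexposed"]

      print("Cohen's kappa:", cohen_kappa_score(crosswalk, manual))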

  5. Ecoacoustic codes and ecological complexity.

    Science.gov (United States)

    Farina, Almo

    2018-02-01

    A multi-layer communication and sensing network assures the exchange of relevant information between animals and their umwelten, imparting complexity to ecological systems. Individual soniferous species, the acoustic community, and the soundscape are the three main operational levels that comprise this multi-layer network. Acoustic adaptation and acoustic niche are two of the most important mechanisms that regulate the acoustic performances at the first level, while the acoustic community model explains the complexity of the interspecific acoustic network at the second level. Acoustic habitat and ecoacoustic events are two of the most relevant mechanisms that operate at the third level. The exchange of ecoacoustic information on each of these levels is assured by ecoacoustic codes. At the level of individual soniferous species, a dyadic intraspecific exchange of information is established between an emitter and a receiver. Ecoacoustic codes discriminate, identify, and label specific signals that pertain to the theme, variation, motif repetition, and intensity of signals. At the acoustic community level, a voluntary or involuntary communication is established between networks of interspecific emitters and receivers. Ecoacoustic codes at this level transmit information (e.g., recognition of predators, location of food sources, availability and location of refuges) between one species and the acoustically interacting community and impart cohesion to interspecific assemblages. At the soundscape level, acoustic information is transferred from a mosaic of geophonies, biophonies, and technophonies to different species that discriminate meaningful ecoacoustic events and their temporal dynamics during habitat selection processes. Ecoacoustic codes at this level operate on a limited set of signals from the environmental acoustic dynamic that are heterogeneous in time and space, and these codes are interpreted differently according to the species during habitat selection and the

  6. The SHIELD11 Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W

    2005-02-02

    SHIELD11 is a computer code for performing shielding analyses around a high-energy electron accelerator. It makes use of simple analytic expressions for the production and attenuation of photons and neutrons resulting from electron beams striking thick targets, such as dumps, stoppers, collimators, and other beam devices. The formulae in SHIELD11 are somewhat unpretentious in that they are based on the extrapolation (scaling) of experimental data using rather simple physics ideas. Because these scaling methods have only been tested over a rather limited set of conditions--namely, 1-15 GeV electrons striking 10-20 radiation lengths of iron--a certain amount of care and judgment must be exercised whenever SHIELD11 is used. Nevertheless, for many years these scaling methods have been applied rather successfully to a large variety of problems at SLAC, as well as at other laboratories throughout the world, and the SHIELD11 code has been found to be a fast and convenient tool. In this paper we present, without extensive theoretical justification or experimental verification, the five-component model on which the SHIELD11 code is based. Our intent is to demonstrate how to use the code by means of a few simple examples. References are provided that are considered to be essential for a full understanding of the model. The code itself contains many comments to provide some guidance for the informed user, who may wish to improve on the model.

  7. Index coding via linear programming

    CERN Document Server

    Blasiak, Anna; Lubetzky, Eyal

    2010-01-01

    Index Coding has received considerable attention recently motivated in part by applications such as fast video-on-demand and efficient communication in wireless networks and in part by its connection to Network Coding. The basic setting of Index Coding encodes the side-information relation, the problem input, as an undirected graph and the fundamental parameter is the broadcast rate $\\beta$, the average communication cost per bit for sufficiently long messages (i.e. the non-linear vector capacity). Recent nontrivial bounds on $\\beta$ were derived from the study of other Index Coding capacities (e.g. the scalar capacity $\\beta_1$) by Bar-Yossef et al (FOCS'06), Lubetzky and Stav (FOCS'07) and Alon et al (FOCS'08). However, these indirect bounds shed little light on the behavior of $\\beta$ and its exact value remained unknown for \\emph{any graph} where Index Coding is nontrivial. Our main contribution is a hierarchy of linear programs whose solutions trap $\\beta$ between them. This enables a direct information-...

  8. Using Binary Code Instrumentation in Computer Security

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2013-01-01

    Full Text Available The paper approaches the low-level details of the code generated by compilers whose format permits outside actions. Binary code modifications are done manually when the internal format is known and understood, or automatically by tools developed to process the binary code. The goals of binary code instrumentation range from increasing security and fixing bugs to developing malicious software. The paper highlights binary code instrumentation techniques based on code injection to increase the security and reliability of a software application. Also, the paper offers examples for understanding binary code formats and shows how binary code injection may be applied.

  9. Requirements of a Better Secure Program Coding

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2012-01-01

    Full Text Available Secure program coding refers to how to manage the risks determined by security breaches originating in the program source code. The paper reviews the best practices that must be followed during the software development life cycle for secure software assurance, the methods and techniques used for secure coding assurance, the best-known and most common vulnerabilities caused by a bad coding process, and how the security risks are managed and mitigated. As a tool for better secure program coding, the code review process is presented, together with objective measures for code review assurance and estimation of the effort for the code improvement.

  10. On the Dimension of Graph Codes with Reed–Solomon Component Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Pinero, Fernando

    2013-01-01

    We study a class of graph based codes with Reed-Solomon component codes as affine variety codes. We give a formulation of the exact dimension of graph codes in general. We give an algebraic description of these codes which makes the exact computation of the dimension of the graph codes easier....

  11. A genetic scale of reading frame coding.

    Science.gov (United States)

    Michel, Christian J

    2014-08-21

    The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% in average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% in average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% in average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% in average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with a RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights in the origin and evolution of the genetic code. Copyright © 2014 Elsevier
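
    The comma-free property underlying the RFC probability of 1 quoted for 15 amino acids can be checked mechanically: no codeword may appear in a shifted reading frame of any concatenation of two codewords. The sketch below implements that test for a small, invented trinucleotide set; it does not reproduce the paper's PrRFC statistic.

      # Sketch: test whether a small trinucleotide code is comma-free, i.e. no codeword
      # appears in a shifted frame of any concatenation of two codewords.
      # The example set is illustrative, not the code X from the paper.
      def is_comma_free(code):
          code = set(code)
          for u in code:
              for v in code:
                  w = u + v
                  if w[1:4] in code or w[2:5] in code:
                      return False
          return True

      example = {"AAC", "ATC", "GTA"}
      print("comma-free:", is_comma_free(example))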

  12. ASME Code Efforts Supporting HTGRs

    Energy Technology Data Exchange (ETDEWEB)

    D.K. Morton

    2012-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  13. FLOWTRAN-TF code description

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  14. FLOWTRAN-TF code description

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1991-09-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  15. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated for lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction...... of a halftone pattern dictionary. The decoder first decodes the gray-scale image. Then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy...... and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method for halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion...
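
    The decoder step described above (a gray-scale value indexes a halftone pattern, which is pasted into the output bitmap) can be sketched as follows. The dictionary and the tiny gray-scale image are toy examples; actual JBIG2 bitstream decoding is not shown.

      # Sketch of the halftone reconstruction step: each decoded gray-scale value indexes
      # a pattern in the dictionary, and that pattern is copied into the output bitmap.
      import numpy as np

      s = 4                                              # pattern size (s x s)
      threshold = np.random.default_rng(2).permutation(s * s).reshape(s, s)
      # Toy dictionary: gray level g (0 .. 16) maps to a pattern with g black pixels.
      dictionary = np.array([threshold < g for g in range(s * s + 1)], dtype=np.uint8)

      gray = np.array([[0, 5, 10], [16, 8, 3]])          # decoded gray-scale image
      h, w = gray.shape
      bitmap = np.zeros((h * s, w * s), dtype=np.uint8)
      for i in range(h):
          for j in range(w):
              bitmap[i*s:(i+1)*s, j*s:(j+1)*s] = dictionary[gray[i, j]]
      print(bitmap)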

  16. MAGNETOHYDRODYNAMIC EQUATIONS (MHD) GENERATION CODE

    Directory of Open Access Journals (Sweden)

    Francisco Frutos Alfaro

    2017-04-01

    Full Text Available A program to generate Fortran and C code for the full magnetohydrodynamic equations is presented. The program uses the free computer algebra system REDUCE. This software has a package called EXCALC, which is an exterior calculus program. The advantage of this program is that it can be modified to include another complex metric or spacetime. The output of this program is modified by means of a LINUX script, which creates a new REDUCE program to manipulate the magnetohydrodynamic equations and obtain a code that can be used as a seed for a magnetohydrodynamic code for numerical applications. As an example, we present part of the output of our programs for Cartesian coordinates and show how to do the discretization.

  17. Ultrasound imaging using coded signals

    DEFF Research Database (Denmark)

    Misaridis, Athanasios

    Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate...... of the excitation signal. Although a gain in signal-to-noise ratio of about 20 dB is theoretically possible for the time-bandwidth product available in ultrasound, it is shown that the effects of transducer weighting and tissue attenuation reduce the maximum gain to about 10 dB for robust compression with low sidelobes...... is described. Application of coded excitation in array imaging is evaluated through simulations in Field II. The low degree of orthogonality among coded signals for ultrasound systems is first discussed, and the effect of mismatched filtering on the cross-correlation properties of the signals is evaluated...
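
    The basic coded-excitation idea (a long modulated pulse followed by matched-filter compression, with an SNR gain roughly equal to the time-bandwidth product) can be sketched with a linear FM chirp. The parameters below are illustrative and ignore transducer weighting, attenuation, and the tapering needed for low sidelobes.

      # Sketch: linear FM (chirp) coded excitation and matched-filter pulse compression.
      import numpy as np
      from scipy.signal import chirp

      fs = 100e6                                        # sampling rate, 100 MHz
      T, f0, f1 = 20e-6, 2e6, 8e6                       # 20 us chirp sweeping 2-8 MHz
      t = np.arange(0, T, 1 / fs)
      excitation = chirp(t, f0=f0, t1=T, f1=f1)         # time-bandwidth product ~ 120

      rng = np.random.default_rng(3)
      echo = np.zeros(4096)
      echo[1000:1000 + excitation.size] = 0.05 * excitation   # weak scatterer at sample 1000
      echo += 0.05 * rng.standard_normal(echo.size)           # receiver noise

      compressed = np.correlate(echo, excitation, mode="valid")
      print("compressed peak at sample:", int(np.argmax(np.abs(compressed))))   # ~ 1000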

  18. ASME Code Efforts Supporting HTGRs

    Energy Technology Data Exchange (ETDEWEB)

    D.K. Morton

    2011-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  19. ASME Code Efforts Supporting HTGRs

    Energy Technology Data Exchange (ETDEWEB)

    D.K. Morton

    2010-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  20. language choice, code-switching and code-mixing in biase

    African Journals Online (AJOL)

    Ada

    switching and code-mixing in a multi-lingual Biase Local Government Area in Cross River State, Nigeria. It looks at the different languages spoken in Biase - from the local languages which serve as mother tongues (MT/L1) to other languages in use in ...

  1. Stakeholders' Opinions on the use of Code Switching/ Code Mixing ...

    African Journals Online (AJOL)

    This paper focuses on the opinions of stakeholders on the use of code-switching for teaching and learning in Tanzania secondary schools although examinations are set in English. English-Kiswahili code switching is employed intensively in the classrooms by both teachers and learners, as a coping strategy to attain ...

  2. Signal Constellations for Multilevel Coded Modulation with Sparse Graph Codes

    NARCIS (Netherlands)

    Cronie, H.S.

    2005-01-01

    A method to combine error-correction coding and spectral efficient modulation for transmission over channels with Gaussian noise is presented. The method of modulation leads to a signal constellation in which the constellation symbols have a nonuniform distribution. This gives a so-called shape gain

  3. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

    Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it in a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of a slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information-theoretic analysis of such an approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  4. Some partial-unit-memory convolutional codes

    Science.gov (United States)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  5. Validation issues for SSI codes

    Energy Technology Data Exchange (ETDEWEB)

    Philippacopoulos, A.J.

    1995-02-01

    The paper describes the results of a recent work which was performed to verify computer code predictions in the SSI area. The first part of the paper is concerned with analytic solutions of the system response. The mathematical derivations are reasonably reduced by the use of relatively simple models which capture fundamental ingredients of the physics of the system motion while allowing for the response to be obtained analytically. Having established explicit forms of the system response, numerical solutions from three computer codes are presented in comparative format.

  6. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list...

  7. An Analysis of Statewide Adoption Rates of Building Energy Code by Local Jurisdictions

    Energy Technology Data Exchange (ETDEWEB)

    Cort, Katherine A.; Butner, Ryan S.

    2012-12-31

    The purpose of this study is to generally inform the U.S. Department of Energy’s Building Energy Codes Program of the local, effective energy code adoption rate for a sample set of 21 states, some of which have adopted statewide codes and some of which have not. Information related to the residential energy code adoption process and status at the local jurisdiction level was examined for each of the states. Energy code status information was gathered for approximately 2,800 jurisdictions, which effectively covered approximately 80 percent of the new residential building construction in the 21 states included in the study.

  8. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
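
    A minimal numerical sketch of the compressive-sensing principle behind these systems: a sparse signal is measured with far fewer random projections than its length and recovered by orthogonal matching pursuit. The sizes and the recovery algorithm are illustrative choices, not taken from the dissertation.

      # Sketch: compressive measurement of a sparse signal and recovery by
      # orthogonal matching pursuit (illustrative; not the dissertation's systems).
      import numpy as np

      rng = np.random.default_rng(4)
      n, m, k = 256, 80, 5                             # signal length, measurements, sparsity
      x = np.zeros(n)
      x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
      y = A @ x                                        # compressive measurements

      # Orthogonal matching pursuit: greedily pick atoms, re-fit by least squares.
      support, residual = [], y.copy()
      for _ in range(k):
          support.append(int(np.argmax(np.abs(A.T @ residual))))
          coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
          residual = y - A[:, support] @ coef

      x_hat = np.zeros(n)
      x_hat[support] = coef
      print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))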

  9. Code blue: what to do?

    Science.gov (United States)

    Porteous, Joan

    2009-09-01

    Cardiac arrest may occur intraoperatively at any time. The purpose of this article is to help the reader recognize and assist in the management of an intraoperative cardiac arrest. Patients who are at risk for cardiac arrest in the OR are identified, and the different types of pulseless arrhythmias are described. Roles of perioperative personnel are suggested and documentation during the code is discussed.

  10. Coding as literacy metalithikum IV

    CERN Document Server

    Bühlmann, Vera; Moosavi, Vahid

    2015-01-01

    Recent developments in computer science, particularly "data-driven procedures" have opened a new level of design and engineering. This has also affected architecture. The publication collects contributions on Coding as Literacy by computer scientists, mathematicians, philosophers, cultural theorists, and architects. "Self-Organizing Maps" (SOM) will serve as the concrete reference point for all further discussions.

  11. Coding and English Language Teaching

    Science.gov (United States)

    Stevens, Vance; Verschoor, Jennifer

    2017-01-01

    According to Dudeney, Hockly, and Pegrum (2013) coding is a deeper skill subsumed under the four main digital literacies of language, connections, information, and (re)design. Coders or programmers are people who write the programmes behind everything we see and do on a computer. Most students spend several hours playing online games, but few know…

  12. Smells in software test code

    NARCIS (Netherlands)

    Garousi, Vahid; Küçük, Barış

    2018-01-01

    As a type of anti-pattern, test smells are defined as poorly designed tests and their presence may negatively affect the quality of test suites and production code. Test smells are the subject of active discussions among practitioners and researchers, and various guidelines to handle smells are

  13. Reusable State Machine Code Generator

    Science.gov (United States)

    Hoffstadt, A. A.; Reyes, C.; Sommer, H.; Andolfato, L.

    2010-12-01

    The State Machine model is frequently used to represent the behaviour of a system, allowing one to express and execute this behaviour in a deterministic way. A graphical representation such as a UML State Chart diagram tames the complexity of the system, thus facilitating changes to the model and communication between developers and domain experts. We present a reusable state machine code generator, developed by the Universidad Técnica Federico Santa María and the European Southern Observatory. The generator itself is based on the open source project architecture, and uses UML State Chart models as input. This allows for a modular design and a clean separation between generator and generated code. The generated state machine code has well-defined interfaces that are independent of the implementation artefacts such as the middle-ware. This allows using the generator in the substantially different observatory software of the Atacama Large Millimeter Array and the ESO Very Large Telescope. A project-specific mapping layer for event and transition notification connects the state machine code to its environment, which can be the Common Software of these projects, or any other project. This approach even makes it possible to automatically create tests for a generated state machine, using techniques from software testing, such as path-coverage.
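
    The flavour of such generated code can be conveyed with a hand-written sketch: a transition table, an event handler, and a notification hook standing in for the project-specific mapping layer. All names here are invented; this is not the generator's actual output or interfaces.

      # Hand-written sketch of generator-style output: a transition-table-driven state
      # machine with a notification hook for a project-specific mapping layer.
      class TelescopeStateMachine:
          TRANSITIONS = {
              ("OFF",     "init"):    "STANDBY",
              ("STANDBY", "online"):  "ONLINE",
              ("ONLINE",  "standby"): "STANDBY",
              ("STANDBY", "off"):     "OFF",
          }

          def __init__(self, notifier=print):
              self.state = "OFF"
              self.notify = notifier        # mapping-layer hook (middleware, logging, ...)

          def handle(self, event):
              key = (self.state, event)
              if key not in self.TRANSITIONS:
                  self.notify(f"event '{event}' rejected in state {self.state}")
                  return False
              old, self.state = self.state, self.TRANSITIONS[key]
              self.notify(f"{old} --{event}--> {self.state}")
              return True

      sm = TelescopeStateMachine()
      sm.handle("init")      # OFF --init--> STANDBY
      sm.handle("online")    # STANDBY --online--> ONLINE
      sm.handle("shutdown")  # rejected: no such transition from ONLINE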

  14. Code Properties from Holographic Geometries

    Directory of Open Access Journals (Sweden)

    Fernando Pastawski

    2017-05-01

    Full Text Available Almheiri, Dong, and Harlow [J. High Energy Phys. 04 (2015) 163, 10.1007/JHEP04(2015)163] proposed a highly illuminating connection between the AdS/CFT holographic correspondence and operator algebra quantum error correction (OAQEC). Here, we explore this connection further. We derive some general results about OAQEC, as well as results that apply specifically to quantum codes that admit a holographic interpretation. We introduce a new quantity called price, which characterizes the support of a protected logical system, and find constraints on the price and the distance for logical subalgebras of quantum codes. We show that holographic codes defined on bulk manifolds with asymptotically negative curvature exhibit uberholography, meaning that a bulk logical algebra can be supported on a boundary region with a fractal structure. We argue that, for holographic codes defined on bulk manifolds with asymptotically flat or positive curvature, the boundary physics must be highly nonlocal, an observation with potential implications for black holes and for quantum gravity in AdS space at distance scales that are small compared to the AdS curvature radius.

  15. QR Codes: Taking Collections Further

    Science.gov (United States)

    Ahearn, Caitlin

    2014-01-01

    With some thought and direction, QR (quick response) codes are a great tool to use in school libraries to enhance access to information. From March through April 2013, Caitlin Ahearn interned at Sanborn Regional High School (SRHS) under the supervision of Pam Harland. As a result of Harland's un-Deweying of the nonfiction collection at SRHS,…

  16. Code Properties from Holographic Geometries

    Science.gov (United States)

    Pastawski, Fernando; Preskill, John

    2017-04-01

    Almheiri, Dong, and Harlow [J. High Energy Phys. 04 (2015) 163., 10.1007/JHEP04(2015)163] proposed a highly illuminating connection between the AdS /CFT holographic correspondence and operator algebra quantum error correction (OAQEC). Here, we explore this connection further. We derive some general results about OAQEC, as well as results that apply specifically to quantum codes that admit a holographic interpretation. We introduce a new quantity called price, which characterizes the support of a protected logical system, and find constraints on the price and the distance for logical subalgebras of quantum codes. We show that holographic codes defined on bulk manifolds with asymptotically negative curvature exhibit uberholography, meaning that a bulk logical algebra can be supported on a boundary region with a fractal structure. We argue that, for holographic codes defined on bulk manifolds with asymptotically flat or positive curvature, the boundary physics must be highly nonlocal, an observation with potential implications for black holes and for quantum gravity in AdS space at distance scales that are small compared to the AdS curvature radius.

  17. Generating Constant Weight Binary Codes

    Science.gov (United States)

    Knight, D.G.

    2008-01-01

    The determination of bounds for A(n, d, w), the maximum possible number of binary vectors of length n, weight w, and pairwise Hamming distance no less than d, is a classic problem in coding theory. Such sets of vectors have many applications. A description is given of how the problem can be used in a first-year undergraduate computational…
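
    A small computational exercise in this spirit is a greedy (lexicographic) search, which gives a lower bound on A(n, d, w) by brute-force enumeration; it is only practical for small n and is an illustration, not one of the constructions referenced in the article.

      # Greedy lower bound on A(n, d, w): scan all weight-w binary vectors of length n in
      # lexicographic order and keep those at Hamming distance >= d from all kept vectors.
      from itertools import combinations

      def greedy_constant_weight(n, d, w):
          kept = []
          for ones in combinations(range(n), w):
              v = set(ones)
              # Distance between two weight-w vectors is 2 * (w - size of their overlap).
              if all(2 * (w - len(v & u)) >= d for u in kept):
                  kept.append(v)
          return kept

      code = greedy_constant_weight(n=8, d=4, w=4)
      print("greedy lower bound on A(8, 4, 4):", len(code))   # A(8, 4, 4) is known to be 14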

  18. Three-dimensional stellarator codes.

    Science.gov (United States)

    Garabedian, P R

    2002-08-06

    Three-dimensional computer codes have been used to develop quasisymmetric stellarators with modular coils that are promising candidates for a magnetic fusion reactor. The mathematics of plasma confinement raises serious questions about the numerical calculations. Convergence studies have been performed to assess the best configurations. Comparisons with recent data from large stellarator experiments serve to validate the theory.

  19. Three-dimensional stellarator codes

    Science.gov (United States)

    Garabedian, P. R.

    2002-01-01

    Three-dimensional computer codes have been used to develop quasisymmetric stellarators with modular coils that are promising candidates for a magnetic fusion reactor. The mathematics of plasma confinement raises serious questions about the numerical calculations. Convergence studies have been performed to assess the best configurations. Comparisons with recent data from large stellarator experiments serve to validate the theory. PMID:12140367

  20. Optimal, Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....

  1. The Minimum Distance of Graph Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other...

  2. Principled Syntactic Code Completion using Placeholders

    NARCIS (Netherlands)

    De Souza Amorim, L.E.; Erdweg, S.T.; Wachsmuth, G.H.; Visser, Eelco; Varro, D.; Balland, E.; van der Storm, T.

    2016-01-01

    Principled syntactic code completion enables developers to change source code by inserting code templates, thus increasing developer efficiency and supporting language exploration. However, existing code completion systems are ad-hoc and neither complete nor sound. They are not complete and only

  3. TOCAR: a code to interface FOURACES - CARNAVAL

    Energy Technology Data Exchange (ETDEWEB)

    Panini, G.C.; Vaccari, M.

    1981-08-01

    The TOCAR code, written in FORTRAN-IV for IBM-370 computers, is an interface between the output of the FOURACES code and the CARNAVAL binary format for the multigroup neutron cross-sections, scattering matrices and related quantities. Besides the description of the code and how to use it, the report contains the code listing.

  4. Elevating The Status of Code in Ecology.

    Science.gov (United States)

    Mislan, K A S; Heer, Jeffrey M; White, Ethan P

    2016-01-01

    Code is increasingly central to ecological research but often remains unpublished and insufficiently recognized. Making code available allows analyses to be more easily reproduced and can facilitate research by other scientists. We evaluate journal handling of code, discuss barriers to its publication, and suggest approaches for promoting and archiving code. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  5. Channel coding techniques for wireless communications

    CERN Document Server

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  6. On the weight distribution of convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H

    2005-01-01

    Detailed information about the weight distribution of a convolutional code is given by the adjacency matrix of the state diagram associated with a minimal realization of the code. We will show that this matrix is an invariant of the code. Moreover, it will be proven that codes with the same

  7. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  8. New convolutional code constructions and a class of asymptotically good time-varying codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1973-01-01

    We show that the generator polynomials of certain cyclic codes define noncatastrophic fixed convolutional codes whose free distances are lower-bounded by the minimum distances of the cyclic codes. This result is used to construct convolutional codes with free distance equal to the constraint length...... and to derive convolutional codes with good free distances from the BCH codes. Finally, a class of time-varying codes is constructed for which the free distance increases linearly with the constraint length....

  9. Code Carnivals: resuscitating Code Blue training with accelerated learning.

    Science.gov (United States)

    Keys, Vicky A; Malone, Peggy; Brim, Carla; Schoonover, Heather; Nordstrom, Cindy; Selzler, Melissa

    2009-12-01

    Nurses in the hospital setting must be knowledgeable about resuscitation procedures and proficient in the delivery of care during an emergency. They must be ready to implement their knowledge and skills at a moment's notice. A common dilemma for many nurses is that cardiopulmonary emergencies (Code Blues) are infrequent occurrences. Therefore, how do nurses remain competent and confident in their implementation of emergency skills while having limited exposure to the equipment and minimal experience in emergency situations? A team of nurse educators at a regional medical center in Washington State applied adult learning theory and accelerated learning techniques to develop and present a series of learning activities to enhance the staff's familiarity with emergency equipment and procedures. The series began with a carnival venue that provided hands-on practice and review of emergency skills and was reinforced with subsequent random unannounced code drills led by both educators and charge nurses. Copyright 2009, SLACK Incorporated.

  10. The chromatin regulatory code: Beyond a histone code

    Science.gov (United States)

    Lesne, A.

    2006-03-01

    In this commentary on the contribution by Arndt Benecke in this issue, I discuss why the notion of “chromatin code” introduced and elaborated in this paper is to be preferred to that of “histone code”. Speaking of a code as regards nucleosome conformation and histone tail post-translational modifications only makes sense within the chromatin fiber, where their physico-chemical features can be translated into regulatory programs at the genome level, by means of a complex, multi-level interplay with the fiber architecture and dynamics settled in the course of Evolution. In particular, this chromatin code presumably exploits allosteric transitions of the chromatin fiber. The chromatin structure dependence of its translation suggests two alternative modes of transcription initiation regulation, also proposed in the paper by A. Benecke in this issue for interpreting strikingly bimodal micro-array data.

  11. Code-excited linear predictive coding of multispectral MR images

    Science.gov (United States)

    Hu, Jian-Hong; Wang, Yao; Cahill, Patrick

    1996-02-01

    This paper reports a multispectral code-excited linear predictive coding method for the compression of well-registered multispectral MR images. Different linear prediction models and adaptation schemes have been compared. The method which uses a forward adaptive autoregressive (AR) model has proven to achieve a good compromise between performance, complexity and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over non-overlapping square macroblocks. Each macroblock is further divided into several microblocks, and the best excitation signals for each microblock are determined through an analysis-by-synthesis procedure. To satisfy the high quality requirement for medical images, the error between the original images and the synthesized ones is further coded using a vector quantizer. The MFCELP method has been applied to 26 sets of clinical MR neuro images (20 slices/set, 3 spectral bands/slice, 256 by 256 pixels/image, 12 bits/pixel). It provides a significant improvement over the discrete cosine transform (DCT) based JPEG method, a wavelet transform based embedded zero-tree wavelet (EZW) coding method, as well as the MSARMA method we developed before.

  12. Genetic coding and gene expression - new Quadruplet genetic coding model

    Science.gov (United States)

    Shankar Singh, Rama

    2012-07-01

    Successful demonstration of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but it may also answer the complex and difficult question of the origin of life. It may make the 21st century a century of Biological Sciences as well. Based on the central dogma of biology, genetic codons in conjunction with tRNA play a key role in translating the RNA bases, forming the sequence of amino acids that leads to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons involving three bases of RNA, transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including relevance to the origin of life, will be presented.

  13. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  14. PROSA-1: a probabilistic response-surface analysis code. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J. K.; Mueller, C.

    1978-06-01

    Techniques for probabilistic response-surface analysis have been developed to obtain the probability distributions of the consequences of postulated nuclear-reactor accidents. The uncertainties of the consequences are caused by the variability of the system and model input parameters used in the accident analysis. Probability distributions are assigned to the input parameters, and parameter values are systematically chosen from these distributions. These input parameters are then used in deterministic consequence analyses performed by mechanistic accident-analysis codes. The results of these deterministic consequence analyses are used to generate the coefficients for analytical functions that approximate the consequences in terms of the selected input parameters. These approximating functions are used to generate the probability distributions of the consequences with random sampling being used to obtain values for the accident parameters from their distributions. A computer code PROSA has been developed for implementing the probabilistic response-surface technique. Special features of the code generate or treat sensitivities, statistical moments of the input and output variables, regionwise response surfaces, correlated input parameters, and conditional distributions. The code can also be used for calculating important distributions of the input parameters. The use of the code is illustrated in conjunction with the fast-running accident-analysis code SACO to provide probability studies of LMFBR hypothetical core-disruptive accidents. However, the methods and the programming are general and not limited to such applications.
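
    The response-surface idea can be sketched in a few lines: a handful of deterministic runs of an expensive consequence code (replaced here by a cheap stand-in function) are fitted with a quadratic surface, which is then sampled by Monte Carlo to obtain the consequence distribution. This is only an illustration of the technique, not the PROSA or SACO models.

      # Sketch of the response-surface technique: fit a quadratic surface to a few
      # deterministic "accident code" runs, then Monte Carlo sample the cheap surface.
      # The consequence function below is a stand-in, not a SACO calculation.
      import numpy as np

      rng = np.random.default_rng(5)

      def accident_code(x1, x2):              # placeholder for an expensive code run
          return 10.0 + 3.0 * x1 - 2.0 * x2 + 0.5 * x1 * x2 + 0.8 * x1**2

      # Deterministic runs at systematically chosen parameter values.
      grid = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
      runs = np.array([accident_code(a, b) for a, b in grid])

      # Least-squares fit of a full quadratic response surface.
      def design(x):
          x1, x2 = x[:, 0], x[:, 1]
          return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
      coef, *_ = np.linalg.lstsq(design(grid), runs, rcond=None)

      # Monte Carlo sampling of the input distributions through the cheap surface.
      samples = rng.normal(loc=[0.0, 0.0], scale=[0.3, 0.5], size=(100_000, 2))
      consequences = design(samples) @ coef
      print("mean consequence:", consequences.mean(),
            " 95th percentile:", np.percentile(consequences, 95))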

  15. Box codes of lengths 48 and 72

    Science.gov (United States)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF (64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose Chaudhuri Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF (512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  16. Amino acid codes in mitochondria as possible clues to primitive codes

    Science.gov (United States)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.

  17. Error suppression via complementary gauge choices in Reed-Muller codes

    Science.gov (United States)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  18. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    Science.gov (United States)

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement for the girth-10, rate-0.906 code of length 11936 over a maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10^-6. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB and provides a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit within 1.25 dB.

  19. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    Science.gov (United States)

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…
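
    As a practical aside, QR codes of the kind described here can be generated with freely available tools. The short Python sketch below uses the third-party qrcode package (an assumption about the reader's toolchain, installable with "pip install qrcode[pil]") to encode a hypothetical link to a skill-demonstration video for one activity station.

      import qrcode  # third-party package: pip install qrcode[pil]

      # Encode a (hypothetical) link to an instructional video for one station.
      url = "https://example.org/pe/station3-video"
      img = qrcode.make(url)          # build the QR code image
      img.save("station3_qr.png")     # print and post at the activity station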

  20. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2017-10-01

    Quantum maximal-distance-separable (MDS) codes that satisfy the quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian-dual containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n,k,dz/dx

  1. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... which cycle, go to: http://www.iccsafe.org/cs/codes/Web pages/cycle.aspx. The Code Development Process..., Country Club Hills, Illinois 60478; or download a copy from the ICC Web site noted previously. The... Code. International Property Maintenance Code. International Residential Code. International Swimming...

  2. Predictive coding in Agency Detection

    DEFF Research Database (Denmark)

    Andersen, Marc Malmdorf

    2017-01-01

    Agency detection is a central concept in the cognitive science of religion (CSR). Experimental studies, however, have so far failed to lend support to some of the most common predictions that follow from current theories on agency detection. In this article, I argue that predictive coding, a highly ... , unbeknownst to consciousness, engages in sophisticated Bayesian statistics in an effort to constantly predict the hidden causes of sensory input. My fundamental argument is that most false positives in agency detection can be seen as the result of top-down interference in a Bayesian system generating high ... prior probabilities in the face of unreliable stimuli, and that such a system can better account for the experimental evidence than previous accounts of a dedicated agency detection system. Finally, I argue that adopting predictive coding as a theoretical framework has radical implications ...

  3. XSTAR Code and Database Status

    Science.gov (United States)

    Kallman, Timothy R.

    2017-08-01

    The XSTAR code is a simulation tool for calculating spectra associated with plasmas which are in a time-steady balance among the microphysical processes. It allows for treatment of plasmas which are exposed to illumination by energetic photons, but also treats processes relevant to collision-dominated plasmas. Processes are treated in a full collisional-radiative formalism which includes convergence to local thermodynamic equilibrium under suitable conditions. It features an interface to the most widely used software for fitting to astrophysical spectra, and has also been compared with laboratory plasma experiments. This poster will describe the recent updates to XSTAR, including atomic data, new features, and some recent applications of the code.

  4. Prospective coding in event representation.

    Science.gov (United States)

    Schütz-Bosbach, Simone; Prinz, Wolfgang

    2007-06-01

    A perceived event, such as a visual stimulus in the external world, and a to-be-produced event, such as an intentional action, are subserved by event representations. Event representations contain information not only about present states but also about past and future states. Here we focus on the role of representing future states in event perception and generation (i.e., prospective coding). Relevant theoretical issues and paradigms are discussed. We suggest that the predictive power of the motor system may be exploited for prospective coding not only in producing but also in perceiving events. Predicting is more advantageous than simply reacting. Perceptual prediction allows us to select appropriate responses ahead of the realization of an (anticipated) event; it is therefore indispensable for adapting flexibly and promptly to new situations and thus for interacting successfully with our physical and social environment.

  5. Network Coded Software Defined Networking

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Roetter, Daniel Enrique Lucani

    2015-01-01

    Software Defined Networking (SDN) and Network Coding (NC) are two key concepts in networking that have garnered much attention in recent years. On the one hand, SDN's potential to virtualize services in the Internet allows large flexibility not only for routing data, but also for managing ... This paper advocates for the use of SDN to bring about future Internet and 5G network services by incorporating network coding (NC) functionalities. The inherent flexibility of both SDN and NC provides a fertile ground to envision more efficient, robust, and secure networking designs, that may also ... incorporate content caching and storage, all of which are key challenges of the future Internet and the upcoming 5G networks. This paper proposes some of the keys behind this intersection and supports it with use cases as well as an implementation that integrated the Kodo library (NC) into OpenFlow (SDN...

  6. Code Development for Collective Effects

    CERN Document Server

    Bruce Li, Kevin Shing; Hegglin, Stefan Eduard; Iadarola, Giovanni; Oeftiger, Adrian; Passarelli, Andrea; Romano, Annalisa; Rumolo, Giovanni; Schenk, Michael; CERN. Geneva. ATS Department

    2016-01-01

    The presentation will cover approaches and strategies of modeling and implementing collective effects in modern simulation codes. We will review some of the general approaches to numerically model collective beam dynamics in circular accelerators. We will then look into modern ways of implementing collective effects with a focus on plainness, modularity and flexibility, using the example of the PyHEADTAIL framework, and highlight some of the advantages and drawbacks emerging from this method. To ameliorate one of the main drawbacks, namely a potential loss of performance compared to the classical fully compiled codes, several options for speed improvements will be mentioned and discussed. Finally some examples and applications will be shown together with future plans and perspectives.

  7. Hello Ruby adventures in coding

    CERN Document Server

    Liukas, Linda

    2015-01-01

    "Code is the 21st century literacy and the need for people to speak the ABCs of Programming is imminent." --Linda Liukas Meet Ruby--a small girl with a huge imagination. In Ruby's world anything is possible if you put your mind to it. When her dad asks her to find five hidden gems Ruby is determined to solve the puzzle with the help of her new friends, including the Wise Snow Leopard, the Friendly Foxes, and the Messy Robots. As Ruby stomps around her world kids will be introduced to the basic concepts behind coding and programming through storytelling. Learn how to break big problems into small problems, repeat tasks, look for patterns, create step-by-step plans, and think outside the box. With hands-on activities included in every chapter, future coders will be thrilled to put their own imaginations to work.

  8. CBP PHASE I CODE INTEGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-09-30

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown & Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown & Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. As part of the CBP project, a general Dynamic Link Library (DLL) interface was

  9. Do you write secure code?

    CERN Multimedia

    Computer Security Team

    2011-01-01

    At CERN, we are excellent at producing software, such as complex analysis jobs, sophisticated control programs, extensive monitoring tools, interactive web applications, etc. This software is usually highly functional, and fulfils the needs and requirements as defined by its author. However, due to time constraints or unintentional ignorance, security aspects are often neglected. Subsequently, it was even more embarrassing for the author to find out that his code was flawed and was used to break into CERN computers, web pages or to steal data…   Thus, if you have the pleasure or task of producing software applications, take some time beforehand to familiarize yourself with good programming practices. They should not only prevent basic security flaws in your code, but also improve its readability, maintainability and efficiency. Basic rules for good programming, as well as essential books on proper software development, can be found in the section for software developers on our security we...
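
    One concrete example of the kind of basic flaw that good programming practice avoids is SQL injection. The hedged Python sketch below, using the standard sqlite3 module and a made-up table, contrasts an unsafe query built by string concatenation with a parameterized query.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.org')")

      user_input = "alice' OR '1'='1"   # malicious input

      # Unsafe: the input becomes part of the SQL text and changes its meaning.
      unsafe = f"SELECT email FROM users WHERE name = '{user_input}'"
      print(conn.execute(unsafe).fetchall())              # leaks every row

      # Safe: the input is passed as a bound parameter, never parsed as SQL.
      safe = "SELECT email FROM users WHERE name = ?"
      print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing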

  10. Introduction of the ASGARD Code

    Science.gov (United States)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  11. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  12. Verified OS Interface Code Synthesis

    Science.gov (United States)

    2016-12-01

    AFRL-AFOSR-JP-TR-2017-0015, Verified OS Interface Code Synthesis. Gerwin Klein, National ICT Australia Limited, 13 Garden St, Eveleigh, AU. Final Report, 02/14/2017. Distribution A: approved for public release, distribution unlimited. Introduction: the central question of this project was how to ensure the correctness of Operating System (OS...

  13. HADES, A Radiographic Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M.B.; Slone, D.M.; Schach von Wittenau, A.E.

    2000-08-18

    We describe features of the HADES radiographic simulation code. We begin with a discussion of why it is useful to simulate transmission radiography. The capabilities of HADES are described, followed by an application of HADES to a dynamic experiment recently performed at the Los Alamos Neutron Science Center. We describe quantitative comparisons between experimental data and HADES simulations using a copper step wedge. We conclude with a short discussion of future work planned for HADES.

  14. Coded continuous wave meteor radar

    Science.gov (United States)

    Chau, J. L.; Vierinen, J.; Pfeffer, N.; Clahsen, M.; Stober, G.

    2016-12-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products, such as wind fields. This type of a radar would also be useful for over-the-horizon radar, ionosondes, and observations of field-aligned-irregularities.
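
    A minimal sketch of the pulse-compression idea behind such a radar is given below (an illustration only, not the processing chain of the system described): a pseudorandom binary phase code is transmitted continuously, and cross-correlating the received signal with the known code concentrates the weak echo energy at the target delay.

      import numpy as np

      rng = np.random.default_rng(2)

      # Pseudorandom binary phase code (+1/-1), transmitted continuously.
      code = rng.choice([-1.0, 1.0], size=10_000)

      # Simulated receive: a weak echo of the code delayed by 137 samples, in noise.
      delay, echo_amplitude = 137, 0.1
      rx = np.roll(code, delay) * echo_amplitude + rng.standard_normal(code.size)

      # Pulse compression: circular cross-correlation with the known code (via FFT).
      compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
      print("estimated delay:", np.argmax(np.abs(compressed)))   # expected ~137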

  15. Clean Code - Why you should care

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    - Martin Fowler. Writing code is communication, not solely with the computer that executes it, but also with other developers and with oneself. A developer spends a lot of his working time reading and understanding code that was written by other developers or by himself in the past. The readability of the code is an important factor in the time needed to find a bug or add new functionality, which in turn has a big impact on productivity. Code that is difficult to understand, hard to maintain and refactor, and offers many spots for bugs to hide is not considered to be "clean code". But what can be considered "clean code", and what are the advantages of a strict application of its guidelines? In this presentation we will take a look at some typical "code smells" and proposed guidelines to improve your coding skills and write cleaner code that is less bug prone and easier to maintain.

  16. Penal Code, 24 June 1987.

    Science.gov (United States)

    1987-01-01

    This document contains provisions of Liechtenstein's 1987 Penal Code relating to sterilization, abortion, polygamy, the protection of women and children, crimes related to marriage, and failure to provide support. The Code holds that sexual sterilization carried out at the patient's request is lawful if the patient is at least 25 years old. Performing or inducing an abortion is punishable with imprisonment unless: 1) the abortion is necessary to prevent serious danger to the life or health of the pregnant woman, 2) the pregnant woman was under 14 years old and not married to the man who impregnated her, or 3) the abortion is performed to save the woman's life. The Code also imposes a prison sentence on anyone abducting a woman who is helpless or unable to resist in order to sexually abuse the woman. Bigamy carries a prison term of up to 3 years, and a prison term of up to 1 year is applied in cases where a person deceives another or compels another into marriage. Removing a minor from the control of those authorized to rear said minor can lead to a prison term of up to 1 year, and abandonment of a minor can lead to a prison term of up to 3 years. Violation of the duty of financial support called for by family law can invoke a prison term of up to 6 months.

  17. Special issue on network coding

    Science.gov (United States)

    Monteiro, Francisco A.; Burr, Alister; Chatzigeorgiou, Ioannis; Hollanti, Camilla; Krikidis, Ioannis; Seferoglu, Hulya; Skachek, Vitaly

    2017-12-01

    Future networks are expected to depart from traditional routing schemes in order to embrace network coding (NC)-based schemes. These have created a lot of interest both in academia and industry in recent years. Under the NC paradigm, symbols are transported through the network by combining several information streams originating from the same or different sources. This special issue contains thirteen papers, some dealing with design aspects of NC and related concepts (e.g., fountain codes) and some showcasing the application of NC to new services and technologies, such as multi-view video streaming or underwater sensor networks. One can find papers that show how NC makes data transmission more robust to packet losses, faster to decode, and more resilient to network changes, such as dynamic topologies and different user options, and how NC can improve the overall throughput. This issue also includes papers showing that NC principles can be used at different layers of the networks (including the physical layer) and how the same fundamental principles can lead to new distributed storage systems. Some of the papers in this issue have a theoretical nature, including code design, while others describe hardware testbeds and prototypes.

  18. Computer Code for Nanostructure Simulation

    Science.gov (United States)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  19. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...
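
    To make the basic idea concrete, the sketch below (a toy illustration, not code from the book's website) fits a half-normal detection function to perpendicular distances from a simulated line transect and converts it to a density estimate; the survey dimensions, detection scale, and true density are assumptions chosen only for the example.

      import numpy as np

      rng = np.random.default_rng(3)

      # Assumed survey: a 100 km transect, animals uniform within 200 m of the line
      # on each side, true density 50 animals/km^2, half-normal detection, sigma = 30 m.
      L_km, w_m, true_density, sigma_true = 100.0, 200.0, 50.0, 30.0
      area_km2 = 2 * (w_m / 1000.0) * L_km
      n_animals = rng.poisson(true_density * area_km2)
      x = rng.uniform(0, w_m, size=n_animals)                    # perpendicular distances
      detected = x[rng.random(n_animals) < np.exp(-x**2 / (2 * sigma_true**2))]

      # Half-normal MLE: sigma_hat^2 equals the mean squared detected distance.
      sigma_hat = np.sqrt(np.mean(detected**2))

      # Effective strip half-width: mu = integral of g(x) dx = sigma * sqrt(pi/2).
      mu_km = sigma_hat * np.sqrt(np.pi / 2) / 1000.0

      # Density estimate: n detections over an effective area of 2 * mu * L.
      density_hat = detected.size / (2 * mu_km * L_km)
      print(f"true density {true_density}, estimated {density_hat:.1f} animals/km^2")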

  20. Development of probabilistic internal dosimetry computer code

    Science.gov (United States)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-02-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we developed a probabilistic internal-dose-assessment code using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of
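
    The propagation step can be illustrated with a minimal Monte Carlo sketch (not the MATLAB code developed in the study): a measured bioassay result with lognormal uncertainty is combined with an uncertain intake-retention fraction and dose coefficient; all of the numerical values are assumptions chosen only for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 100_000                                  # Monte Carlo trials

      # Assumed inputs (illustrative values only):
      measured_Bq = 500.0                          # urine measurement result
      meas = measured_Bq * rng.lognormal(0.0, 0.15, n)      # measurement uncertainty
      irf = rng.lognormal(np.log(0.02), 0.30, n)            # intake retention fraction
      dose_coeff = rng.lognormal(np.log(1.0e-6), 0.25, n)   # Sv per Bq of intake

      # Propagate: intake = measurement / IRF, committed dose = intake * coefficient.
      intake = meas / irf
      dose_Sv = intake * dose_coeff

      for q in (2.5, 5, 50, 95, 97.5):
          print(f"{q:5.1f}th percentile dose: {np.percentile(dose_Sv, q):.2e} Sv")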

  1. Development of Probabilistic Internal Dosimetry Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kwon, Tae-Eun [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of); Lee, Jai-Ki [Korean Association for Radiation Protection, Seoul (Korea, Republic of)

    2017-02-15

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we developed a probabilistic internal-dose-assessment code using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various

  2. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  3. Coded source imaging simulation with visible light

    Energy Technology Data Exchange (ETDEWEB)

    Wang Sheng [State Key Laboratory of Nuclear Physics and Technology and School of Physics, IHIP, Peking University, Yiheyuan Lu 5, Beijing 100871 (China); Zou Yubin, E-mail: zouyubin@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology and School of Physics, IHIP, Peking University, Yiheyuan Lu 5, Beijing 100871 (China); Zhang Xueshuang; Lu Yuanrong; Guo Zhiyu [State Key Laboratory of Nuclear Physics and Technology and School of Physics, IHIP, Peking University, Yiheyuan Lu 5, Beijing 100871 (China)

    2011-09-21

    A coded source could increase the neutron flux while keeping a high L/D ratio. It may benefit a neutron imaging system with a low-yield neutron source. Visible light CSI experiments were carried out to test the physical design and reconstruction algorithm. We used a non-mosaic Modified Uniformly Redundant Array (MURA) mask to project the shadow of black/white samples onto a screen. A cooled-CCD camera was used to record the image on the screen. Different mask sizes and amplification factors were tested. The correlation, Wiener filter deconvolution, and Richardson-Lucy maximum likelihood iteration algorithms were employed to reconstruct the object image from the original projection. The results show that CSI can benefit low-flux neutron imaging with high background noise.
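
    The iterative reconstruction mentioned above can be sketched in a few lines. The example below implements a plain Richardson-Lucy update with NumPy/SciPy for a simulated projection through a coded mask; the random binary mask and the test object are placeholders, not the MURA mask used in the experiment.

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(5)

      # Placeholder coded aperture (random binary mask) and a simple test object.
      mask = (rng.random((15, 15)) < 0.5).astype(float)
      obj = np.zeros((64, 64))
      obj[28:36, 20:44] = 1.0

      # Forward model: the projection is the object convolved with the mask pattern.
      proj = fftconvolve(obj, mask, mode="same")
      proj = rng.poisson(np.clip(proj, 0, None) * 50.0) / 50.0   # counting noise

      # Richardson-Lucy iterations (flipped kernel for the correlation step).
      est = np.full_like(obj, proj.mean())
      mask_flip = mask[::-1, ::-1]
      for _ in range(50):
          blur = fftconvolve(est, mask, mode="same")
          ratio = proj / np.maximum(blur, 1e-12)
          est *= fftconvolve(ratio, mask_flip, mode="same") / mask.sum()
      print("reconstruction peak:", est.max())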

  4. Energy information data base: report number codes

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-09-01

    Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with codes each has used. (RWR)

  5. Tandem Mirror Reactor Systems Code (Version I)

    Energy Technology Data Exchange (ETDEWEB)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.; Barrett, R.J.; Gorker, G.E.; Spampinaton, P.T.; Bulmer, R.H.; Dorn, D.W.; Perkins, L.J.; Ghose, S.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost.

  6. A Mobile Application Prototype using Network Coding

    DEFF Research Database (Denmark)

    Pedersen, Morten Videbæk; Heide, Janus; Fitzek, Frank

    2010-01-01

    This paper looks into implementation details of network coding for a mobile application running on commercial mobile phones. We describe the necessary coding operations and the algorithms that implement them. The coding algorithms form the basis for an implementation in C++ and Symbian C++. We report on practical measurement results of coding throughput and energy consumption for a single-source multiple-sinks network, with and without recoding at the sinks. These results confirm that network coding is practical even on computationally weak platforms, and that network coding potentially can be used...

  7. Toric Codes, Multiplicative Structure and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    Long linear codes constructed from toric varieties over finite fields, their multiplicative structure and decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for decoding, resembling the decoding of Reed-Solomon codes, and aligns with decoding by error correcting pairs. We have used the multiplicative structure on toric codes to construct linear secret sharing schemes with strong multiplication via Massey's construction, generalizing the Shamir linear secret sharing schemes constructed from Reed-Solomon codes. We have constructed quantum error correcting codes from toric surfaces by the Calderbank-Shor-Steane method.

  8. Simulation of Water Chemistry using a Geochemistry Code, PHREEQE

    Energy Technology Data Exchange (ETDEWEB)

    Chi, J.H. [Korea Electric Power Research Institute, Taejeon (Korea)

    2001-07-01

    This report introduces the principles and procedures of water chemistry simulation using a geochemistry code, PHREEQE. As an example of the application of this code, we describe the simulation procedure for titration of an aquatic sample with a strong acid to investigate the state of carbonates in aqueous solution. The major contents of this report are as follows: concepts and principles of PHREEQE; kinds of chemical reactions which may be properly simulated by PHREEQE; the definition and meaning of each input datum; and an example of simulation using PHREEQE. (author). 2 figs., 1 tab.

  9. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the case of the single jet of gas, it has been demonstrated that the implicit code can run a problem in a much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.

  10. On the extension of propelinear structures of Nordstrom-Robinson code to Hamming code

    OpenAIRE

    Mogilnykh, I. Yu.

    2015-01-01

    A code is called propelinear if its automorphism group contains a subgroup that acts regularly on its codewords; such a subgroup is called a propelinear structure on the code. In the paper, a classification of the propelinear structures on the Nordstrom-Robinson code is obtained, and the question of extending these structures to propelinear structures on the Hamming code that contains the Nordstrom-Robinson code is considered. The result partially relies on a representation of all partitions of the Hamming code in...

  11. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    Science.gov (United States)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.

  12. P-code versus C/A-code GPS for range tracking applications

    Science.gov (United States)

    Hoefener, Carl E.; van Wechel, Bob

    This article compares the use of P-code and C/A-code GPS receivers on test and training ranges. The requirements on many ranges for operation under conditions of jamming preclude the use of C/A-code receivers because of their relatively low jamming immunity as compared with P-code receivers. Also, C/A-code receivers present some problems when used with pseudolites on ranges. The cost of P-code receivers is customarily much higher than that of C/A-code receivers. However, most of this difference is caused by factors other than P-code, particularly the parts screening specifications applied to military programs.

  13. Self-orthogonal codes with dual distance three and quantum codes with distance three over

    Science.gov (United States)

    Liang, Fangchi

    2013-12-01

    Self-orthogonal codes with dual distance three and quantum codes with distance three constructed from self-orthogonal codes over are discussed in this paper. Firstly, for given code length , a self-orthogonal code with minimal dimension and dual distance three is constructed. Secondly, for each , two nested self-orthogonal codes with dual distance two and three are constructed, and consequently quantum code of length and distance three is constructed via Steane construction. All of these quantum codes constructed via Steane construction are optimal or near optimal according to the quantum Hamming bound.

  14. On the Role of Functional Categories in Code-Switching: The Igbo ...

    African Journals Online (AJOL)

    Adopting the Functional Head Selection Constraint of the Matrix Language Frame (MLF) Model proposed by Myers-Scotton (1993, 1995), and with samples of code-switched expressions collected from Igbo-English bilinguals, the study examines the pattern of code-switching using Igbo and English as the focal point. Igbo is ...

  15. WPC's Short Range Forecast Coded Bulletin

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Short Range Forecast Coded Bulletin. The Short Range Forecast Coded Bulletin describes the expected locations of high and low pressure centers, surface frontal...

  16. The FLUKA Code: Description And Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, Giuseppe; Muraro, S.; Sala, Paola R.; /INFN, Milan; Cerutti, Fabio; Ferrari, A.; Roesler, Stefan; /CERN; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2007-09-18

    The physics models implemented inside the FLUKA code are briefly described, with emphasis on hadronic interactions. Examples of the capabilities of the code are presented, including basic (thin target) and complex benchmarks.

  17. Coding Theory, Cryptography and Related Areas

    DEFF Research Database (Denmark)

    Buchmann, Johannes; Stichtenoth, Henning; Tapia-Recillas, Horacio

    Proceedings of an International Conference on Coding Theory, Cryptography and Related Areas, held in Guanajuato, Mexico, in April 1998.

  18. Content layer progressive coding of digital maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2000-01-01

    A new lossless context-based method is presented for content-progressive coding of limited-bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bi-level coding, context collapsing methods for multi-level images, and arithmetic coding. Relative pixel patterns are used to collapse contexts. The number of contexts is analyzed. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images.

  19. Coding for Single-Line Transmission

    Science.gov (United States)

    Madison, L. G.

    1983-01-01

    Digital transmission code combines data and clock signals into a single waveform. MADCODE needs four standard integrated circuits in the generator and converter plus five small discrete components. MADCODE allows simple coding and decoding for transmission of digital signals over a single line.
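
    For context, the general idea of combining data and clock into a single waveform can be illustrated with Manchester encoding, a standard textbook scheme shown here only as an analogue; it is not the MADCODE waveform itself.

      # Manchester encoding: each data bit becomes a transition, so the clock can be
      # recovered from the waveform itself. Here '1' -> low-to-high, '0' -> high-to-low.
      def manchester_encode(bits):
          wave = []
          for b in bits:
              wave += [0, 1] if b else [1, 0]   # two half-bit levels per data bit
          return wave

      def manchester_decode(wave):
          return [1 if wave[i] < wave[i + 1] else 0 for i in range(0, len(wave), 2)]

      data = [1, 0, 1, 1, 0, 0, 1]
      encoded = manchester_encode(data)
      assert manchester_decode(encoded) == data
      print(encoded)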

  20. Performance of Turbo Code with Different Parameters

    Directory of Open Access Journals (Sweden)

    Samir Jasim

    2017-08-01

    Turbo codes are an error-correction coding technique in which errors introduced into the transmitted data by a communication channel can be detected and corrected; they provide long codewords with practical decoding complexity. A turbo code is a concatenated code, with component codes connected in serial or in parallel, that transmits data with high throughput and achieves near-Shannon-limit performance. This paper presents the performance of a turbo code with different parameters (number of iterations, decoding technique, code length, rate, generator polynomial, and channel type), obtaining the Bit Error Rate (BER) for each case and then comparing the results to identify the parameters that give the optimum performance of this code. The system is simulated using MATLAB R2016b.

  1. Robust Reed Solomon Coded MPSK Modulation

    Directory of Open Access Journals (Sweden)

    Emir M. Husni

    2014-10-01

    In this paper, the construction of partitioned Reed-Solomon coded modulation (RSCM), which is robust for the additive white Gaussian noise channel and a Rayleigh fading channel, is investigated. By matching the configuration of the component codes with the channel characteristics, it is shown that this system is robust for the Gaussian and a Rayleigh fading channel. This approach is compared with non-partitioned RSCM, a Reed-Solomon code combined with an MPSK signal set using Gray mapping, and block coded MPSK modulation using binary codes, Reed-Muller codes. All codes use a hard decision decoding algorithm. Simulation results for these schemes show that RSCM based on set partitioning performs better than those that are not based on set partitioning and Reed-Muller coded modulation across a wide range of conditions. The novel idea here is that in the receiver, we use a rotated 2^(m+1)-PSK detector if the transmitter uses a 2^m-PSK modulator.

  2. Functions of code switching in multilingual classrooms

    National Research Council Canada - National Science Library

    Ondene van Dulm; Suzanne Rose

    2011-01-01

    .... Within each of these types of code switching, a number of specific functions of code switching in the classrooms observed are identified, such as expansion, clarification, and identity marking...

  3. A Unique Perspective on Data Coding and Decoding

    Directory of Open Access Journals (Sweden)

    Wen-Yan Wang

    2010-12-01

    The concept of a lossless data compression coding method is proposed, and a detailed description of each of its steps follows. Using the Calgary Corpus and Wikipedia data as the experimental samples and comparing with existing algorithms, like PAQ or PPMstr, the new coding method could not only compress the source data, but also further re-compress the data produced by the other compression algorithms. The final files are smaller, and by comparison with the original compression ratio, at least 1% redundancy could be eliminated. The new method is simple and easy to realize. Its theoretical foundation is currently under study. The corresponding Matlab source code is provided in the Appendix.

  4. Automatic differentiation of codes in nuclear engineering applications.

    Energy Technology Data Exchange (ETDEWEB)

    Alexe, M.; Roderick, O.; Utke, J.; Anitescu, M.; Hovland, P.; Fanning, T.; Virginia Polytechnic Inst. and State Univ.; Unv. of Chicago

    2009-12-01

    We discuss our experience in applying automatic differentiation (AD) to calculations in nuclear reactor applications. The document is intended as a guideline on how to apply AD to Fortran codes with significant legacy components; it is also a part of a larger research effort in uncertainty quantification using sampling methods augmented with derivative information. We provide a brief theoretical description of the concept of AD, explain the necessary changes in the code structure, and remark on possible ways to deal with non-differentiability. Numerical experiments were carried out where the derivative of a functional subset of the SAS4A/SASSYS code was computed in forward mode with several AD tools. The results are in good agreement with both the real and complex finite-difference approximations of the derivative.
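
    As a self-contained illustration of the forward-mode AD concept used in those experiments (not the AD tools applied to SAS4A/SASSYS), the sketch below propagates dual numbers through ordinary Python arithmetic to obtain an exact derivative alongside the function value.

      class Dual:
          """Forward-mode AD: carry (value, derivative) through arithmetic."""
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def model(x):
          # Stand-in for a code fragment whose sensitivity to x is wanted.
          return 3.0 * x * x + 2.0 * x + 1.0

      x = Dual(2.0, 1.0)           # seed derivative dx/dx = 1
      y = model(x)
      print(y.val, y.der)          # 17.0 and the exact derivative 14.0 (= 6*2 + 2)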

  5. Comparison of two cochlear implant coding strategies on speech perception.

    Science.gov (United States)

    Dillon, Margaret T; Buss, Emily; King, English R; Deres, Ellen J; Obarowski, Sarah N; Anderson, Meredith L; Adunka, Marcia C

    2016-11-01

    Assess whether differences in speech perception are observed after exclusive listening experience with high-definition continuous interleaved sampling (HDCIS) versus fine structure processing (FSP) coding strategies. Subjects were randomly assigned at initial activation of the external speech processor to receive the HDCIS or FSP coding strategy. Frequency filter assignments were consistent across subjects. The speech perception test battery included CNC words in quiet, HINT sentences in quiet and steady noise (+10 dB SNR), AzBio sentences in quiet and a 10-talker babble (+10 dB SNR), and BKB-SIN. Assessment intervals included 1, 3, and 6 months post-activation. Data from 22 subjects (11 with HDCIS and 11 with FSP) were assessed over time. Speech perception performance was not significantly different between groups. Speech perception performance was not significantly different after 6 months of listening experience with the HDCIS or FSP coding strategy.

  6. Folding, Tiling, and Multidimensional Coding

    OpenAIRE

    Etzion, Tuvi

    2009-01-01

    Folding a sequence S into a multidimensional box is a method that is used to construct multidimensional codes. The well known operation of folding is generalized in a way that the sequence S can be folded into various shapes. The new definition of folding is based on lattice tiling and a direction in the D-dimensional grid. There are potentially (3^D - 1)/2 different folding operations. Necessary and sufficient conditions that a lattice combined with a direction define a folding a...

  7. Wavefront coding for visual optics

    Science.gov (United States)

    Acosta, E.; Arines, J.; Almaguer, C.

    2017-08-01

    Wavefront coding (WFC) enables the depth of field of incoherent optical systems to be extended. This method places a cubic-phase plate in the system, yielding a blurred image that is nearly invariant to defocus. In visual optics there is great interest in improving solutions for two different problems: presbyopia correction and high-resolution retinal imaging with low-cost devices. In this work we will show how the use of cubic phases in contact lenses can be an alternative to multifocal lenses, and how the WFC technique can be applied to record high-resolution retinal images while reducing the complexity of current systems.
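
    A hedged numerical sketch of the cubic-phase idea is shown below: the pupil function is multiplied by exp(i*alpha*(x^3 + y^3)) and the PSF obtained as the squared magnitude of its Fourier transform. The pupil sampling, the value of alpha, and the defocus values are illustrative assumptions only.

      import numpy as np

      N, alpha = 256, 30.0                      # grid size and cubic-phase strength (rad)
      x = np.linspace(-1, 1, N)
      X, Y = np.meshgrid(x, x)
      aperture = (X**2 + Y**2) <= 1.0           # circular pupil

      def psf(defocus_rad):
          """PSF of a pupil with a cubic phase mask plus a defocus term."""
          phase = alpha * (X**3 + Y**3) + defocus_rad * (X**2 + Y**2)
          pupil = aperture * np.exp(1j * phase)
          field = np.fft.fftshift(np.fft.fft2(pupil))
          return np.abs(field) ** 2

      # The cubic mask keeps the PSF similar over a range of defocus, which is what
      # allows a single digital deconvolution step to restore a sharp image.
      for d in (0.0, 5.0, 10.0):
          p = psf(d)
          print(f"defocus {d:4.1f} rad -> normalized peak {p.max() / p.sum():.4f}")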

  8. Language Recognition via Sparse Coding

    Science.gov (United States)

    2016-09-08

    combination of basis vectors in a dictionary (also learned). Nonzeros in the computed sparse code quantify the presence of specific basis vectors. The dictionary is learned online: for each input, a sparse code x is solved for using the current dictionary D_{t-1}; the sufficient statistics are updated as A_t := A_{t-1} + y y^T and B_t := B_{t-1} + x y^T; the dictionary is then updated by block-coordinate descent as D_t := argmin_{D'} (1/t) [ (1/2) Tr(D'^T D' A_t) - Tr(D'^T B_t) ], and D_t is returned. [Figure: utterances 1 through m from each language l_i are encoded against the learned dictionary.]

  9. Theory of Coding Informational Simulation.

    Science.gov (United States)

    1981-04-06

    Possibility of Applying the Code ... for the Construction of Combinatory Switches, by V. G. Yevstigneyev (p. 164). Organization of the Structure of the ... With the multiplication of the fractions A1/P and A2/P, where P is the base of the range and i = 1, ..., n, there appears the fraction A/P^2, where A = A1 ... (Moscow, 1966; p. 97). The possibility of applying the code ... for the construction of ...

  10. Network Coding Over The 2^32 - 5 Prime Field

    DEFF Research Database (Denmark)

    Pedersen, Morten Videbæk; Heide, Janus; Vingelmann, Peter

    2013-01-01

    We investigate the use of prime fields with a field size of 2^32 - 5, as this allows implementations which combine high field sizes and low complexity. First we introduce the algorithms needed to apply prime field arithmetic to arbitrary binary data. After this we present the initial throughput measurements from a benchmark application written in C++. These results are finally compared to different binary and binary extension field implementations. The results show that the prime field implementation offers a large field size while maintaining a very good performance. We believe that using prime fields will be useful in many network coding applications where large field sizes are required.
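
    A minimal sketch of the underlying arithmetic (not the C++/Kodo benchmark itself) is shown below: data symbols are treated as integers smaller than the prime p = 2^32 - 5, and an encoded packet is a random linear combination of the source symbols modulo p. The mapping of arbitrary binary data into symbols below p is glossed over here.

      import random

      P = 2**32 - 5                      # prime field size used for the coding arithmetic
      random.seed(6)

      # Source symbols (already mapped into [0, P); the mapping itself is omitted here).
      source = [random.randrange(P) for _ in range(4)]

      def encode(symbols):
          """Produce one coded symbol: a random linear combination modulo P."""
          coeffs = [random.randrange(P) for _ in symbols]
          value = sum(c * s for c, s in zip(coeffs, symbols)) % P
          return coeffs, value

      # A receiver that collects as many independent coded symbols as there are sources
      # can recover them by Gaussian elimination modulo P (inverses exist since P is prime).
      coeffs, value = encode(source)
      print(coeffs, value)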

  11. Adaptive Hybrid Picture Coding. Volume 1.

    Science.gov (United States)

    1985-02-01

    shifted two pixels in the x direction between time frames. Figure 2.1 is a picture of the actual test image. [Figure 2.1: Radially Decaying Cosine test image.] ... very limited class of simple images, but this method has many drawbacks that will limit its overall usefulness. First, the output is very sensitive to ... images that code the most common gray levels into short code words and rare gray levels into long code words. Hybrid coding is a combination of both

  12. Concentration of Magnetization for Linear Block Codes

    OpenAIRE

    Korada, Satish Babu; Kudekar, Shrinivas; Macris, Nicolas

    2008-01-01

    We consider communication over the binary erasure and the binary additive white Gaussian noise channels using fixed linear block codes and also appropriate ensembles of such codes. We show concentration of the magnetization over the channel realizations and also over the code ensembles. The result has various implications. For the binary erasure channel, the result implies the concentration of the fraction of bits in error over the randomness in both noise and code realization, and that of th...

  13. Functions of code switching in multilingual classrooms

    OpenAIRE

    Ondene van Dulm; Suzanne Rose

    2011-01-01

    The research reported in this paper focuses on the functions of code switching between English and Afrikaans in the classroom interactions of a secondary school in the Western Cape. The data comprising audio recordings of classroom interactions are analysed within the framework of Myers-Scotton’s (1993a) Markedness Model, according to which there are four types of code switching, namely marked, unmarked, sequential unmarked, and exploratory code switching. Within each of these types of code s...

  14. Codes of Ethics and Teachers' Professional Autonomy

    Science.gov (United States)

    Schwimmer, Marina; Maxwell, Bruce

    2017-01-01

    This article considers the value of adopting a code of professional ethics for teachers. After having underlined how a code of ethics stands to benefit a community of educators--namely, by providing a mechanism for regulating autonomy and promoting a shared professional ethic--the article examines the principal arguments against codes of ethics.…

  15. Code quality issues in student programs

    NARCIS (Netherlands)

    Keuning, H.W.|info:eu-repo/dai/nl/411260820; Heeren, B.J.|info:eu-repo/dai/nl/304840130; Jeuring, J.T.|info:eu-repo/dai/nl/075189771

    2017-01-01

    Because low quality code can cause serious problems in software systems, students learning to program should pay attention to code quality early. Although many studies have investigated mistakes that students make during programming, we do not know much about the quality of their code. This study

  16. Families of twisted tensor product codes

    OpenAIRE

    Giuzzi, Luca; Pepe, Valentina

    2011-01-01

    Using geometric properties of the variety $\mathcal{V}_{r,t}$, the image under the Grassmannian map of a Desarguesian $(t-1)$-spread of $\mathrm{PG}(rt-1,q)$, we introduce error correcting codes related to the twisted tensor product construction, producing several families of constacyclic codes. We exactly determine the parameters of these codes and characterise the words of minimum weight.

  17. On Vertex Identifying Codes For Infinite Lattices

    OpenAIRE

    Stanton, Brendon

    2011-01-01

    PhD Thesis--A compilation of the papers: "Lower Bounds for Identifying Codes in Some Infinite Grids", "Improved Bounds for r-identifying Codes of the Hex Grid", and "Vertex Identifying Codes for the n-dimensional Lattice", along with some other results

  18. Network Coding Protocols for Smart Grid Communications

    DEFF Research Database (Denmark)

    Prior, Rui; Roetter, Daniel Enrique Lucani; Phulpin, Yannick

    2014-01-01

    We propose a robust network coding protocol for enhancing the reliability and speed of data gathering in smart grids. At the heart of our protocol lies the idea of tunable sparse network coding, which adopts the transmission of sparsely coded packets at the beginning of the transmission process b...

  19. Source code retrieval using conceptual similarity

    NARCIS (Netherlands)

    Mishne, G.A.; de Rijke, M.

    2004-01-01

    We propose a method for retrieving segments of source code from a large repository. The method is based on conceptual modeling of the code, combining information extracted from the structure of the code and standard information distance measures. Our results show an improvement over traditional

  20. Dynamic Reverse Code Generation for Backward Execution

    DEFF Research Database (Denmark)

    Lee, Jooyong

    2007-01-01

    In this paper, we present a method to generate reverse code, so that backtracking can be performed by executing reverse code. The novelty of our work is that we generate reverse code on-the-fly, while running a debugger, which makes it possible to apply the method even to debugging multi-threaded programs....

  1. 1 CFR 22.6 - Code designation.

    Science.gov (United States)

    2010-01-01

    1 CFR Part 22, General Provisions, Preparation of Notices and Proposed Rules, § 22.6 Code designation (2010): The area of the Code of Federal Regulations directly affected by a proposed regulatory action shall be identified by...

  2. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new...

  3. Secrecy Gain: a Wiretap Lattice Code Design

    OpenAIRE

    Belfiore, Jean-Claude; Oggier, Frédérique

    2010-01-01

    We propose the notion of secrecy gain as a code design criterion for wiretap lattice codes to be used over an additive white Gaussian noise channel. Our analysis relies on the error probabilities of both the legitimate user and the eavesdropper. We focus on geometrical properties of lattices, described by their theta series, to characterize good wiretap codes.

  4. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based...

  5. Code quality Issues in Student Programs

    NARCIS (Netherlands)

    Keuning, Hieke|info:eu-repo/dai/nl/411260820; Heeren, Bastiaan|info:eu-repo/dai/nl/304840130; Jeuring, Johan|info:eu-repo/dai/nl/075189771

    2017-01-01

    Because low quality code can cause serious problems in software systems, students learning to program should pay attention to code quality early. Although many studies have investigated mistakes that students make during programming, we do not know much about the quality of their code. This study

  6. UEP LT Codes with Intermediate Feedback

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Popovski, Petar; Østergaard, Jan

    2013-01-01

    We analyze a class of rateless codes, called Luby transform (LT) codes with unequal error protection (UEP). We show that while these codes successfully provide UEP, there is a significant price in terms of redundancy in the lower prioritized segments. We propose a modification with a single inter...

  7. A Survey of Linear Network Coding and Network Error Correction Code Constructions and Algorithms

    Directory of Open Access Journals (Sweden)

    Michele Sanna

    2011-01-01

    Full Text Available Network coding was introduced by Ahlswede et al. in a pioneering work in 2000. This paradigm encompasses coding and retransmission of messages at the intermediate nodes of the network. In contrast with traditional store-and-forward networking, network coding increases the throughput and the robustness of the transmission. Linear network coding is a practical implementation of this new paradigm covered by several research works that include rate characterization, error-protection coding, and construction of codes. Determining the coding characteristics is especially important, as it provides the premise for an efficient transmission. In this paper, we review the recent breakthroughs in linear network coding for acyclic networks with a survey of the code constructions literature. Deterministic construction algorithms and randomized procedures are presented for traditional network coding and for network-control network coding.
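
    For readers new to the paradigm, the short Python sketch below shows random linear network coding at a source and an intermediate node; the field GF(257) and the packet contents are illustrative stand-ins, and deployed systems typically use GF(2^8) or larger fields.

    import random

    Q = 257  # small prime field, for illustration only

    def rand_combine(packets, rng):
        """Emit one coded packet: random coefficients plus the corresponding combination."""
        coeffs = [rng.randrange(Q) for _ in packets]
        payload = [sum(c * p[i] for c, p in zip(coeffs, packets)) % Q
                   for i in range(len(packets[0]))]
        return coeffs, payload

    rng = random.Random(42)
    source = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]   # three original packets

    # The source emits random combinations; an intermediate node recodes what it has
    # received without decoding it first -- the key difference from store-and-forward.
    received = [rand_combine(source, rng) for _ in range(3)]
    re_coeffs, re_payload = rand_combine([pl for _, pl in received], rng)

    # Coefficients of the recoded packet with respect to the original packets are
    # obtained by composing the two layers of coefficients.
    effective = [sum(rc * cs[j] for rc, (cs, _) in zip(re_coeffs, received)) % Q
                 for j in range(len(source))]
    print(effective, re_payload)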

  8. The Continual Intercomparison of Radiation Codes: Results from Phase I

    Science.gov (United States)

    Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Cole, Jason; Iacono, Michael; Jin, Zhonghai; Li, Jiangnan; Manners, James; Raisanen, Petri; hide

    2011-01-01

    The computer codes that calculate the energy budget of solar and thermal radiation in Global Climate Models (GCMs), our most advanced tools for predicting climate change, have to be computationally efficient in order not to impose an undue computational burden on climate simulations. By using approximations to gain execution speed, these codes sacrifice accuracy compared to more accurate, but also much slower, alternatives. International efforts to evaluate the approximate schemes have taken place in the past, but they have suffered from the drawback that the accurate standards were not validated themselves for performance. The manuscript summarizes the main results of the first phase of an effort called "Continual Intercomparison of Radiation Codes" (CIRC) where the cases chosen to evaluate the approximate models are based on observations and where we have ensured that the accurate models perform well when compared to solar and thermal radiation measurements. The effort is endorsed by international organizations such as the GEWEX Radiation Panel and the International Radiation Commission and has a dedicated website (i.e., http://circ.gsfc.nasa.gov) where interested scientists can freely download data and obtain more information about the effort's modus operandi and objectives. In a paper published in the March 2010 issue of the Bulletin of the American Meteorological Society only a brief overview of CIRC was provided with some sample results. In this paper the analysis of submissions of 11 solar and 13 thermal infrared codes relative to accurate reference calculations obtained by so-called "line-by-line" radiation codes is much more detailed. We demonstrate that, while performance of the approximate codes continues to improve, significant issues still remain to be addressed for satisfactory performance within GCMs. We hope that by identifying and quantifying shortcomings, the paper will help establish performance standards to objectively assess radiation code quality.

  9. The Black Hole Accretion Code

    CERN Document Server

    Porth, Oliver; Mizuno, Yosuke; Younsi, Ziri; Rezzolla, Luciano; Moscibrodzka, Monika; Falcke, Heino; Kramer, Michael

    2016-01-01

    We present the black hole accretion code (BHAC), a new multidimensional general-relativistic magnetohydrodynamics module for the MPI-AMRVAC framework. BHAC has been designed to solve the equations of ideal general-relativistic magnetohydrodynamics in arbitrary spacetimes and exploits adaptive mesh refinement techniques with an efficient block-based approach. Several spacetimes have already been implemented and tested. We demonstrate the validity of BHAC by means of various one-, two-, and three-dimensional test problems, as well as through a close comparison with the HARM3D code in the case of a torus accreting onto a black hole. The convergence of a turbulent accretion scenario is investigated with several diagnostics and we find accretion rates and horizon-penetrating fluxes to be convergent to within a few percent when the problem is run in three dimensions. Our analysis also involves the study of the corresponding thermal synchrotron emission, which is performed by means of a new general-relativistic radi...

  10. Technology cool women who code

    CERN Document Server

    Diehn, Andi

    2015-01-01

    Do you listen to music with an MP3 player or read books on a tablet? Do you play multiplayer video games with people on the other side of the world? Do you have a robot cleaning your kitchen? Maybe not yet, but someday! In Technology: Cool Women Who Code, kids in grades four through six learn about the thrilling effort that goes into researching, inventing, programming, and producing the technology we use today, from iPods to mechanical limbs. Young readers discover exactly what technology is, how it evolved, and where the future may lead. They also meet three women who have contributed to the field in critical ways, including Grace Hopper and Shaundra Bryant Daily. Technology: Cool Women Who Code combines high-interest content with links to online primary sources and essential questions that further expand kids' knowledge and understanding of a topic they come in contact with every day. Compelling portraits of women who have excelled in meeting the challenges of their field keep kids interested and infused w...

  11. A Message Without a Code?

    Directory of Open Access Journals (Sweden)

    Tom Conley

    1981-01-01

    Full Text Available The photographic paradox is said to be that of a message without a code, a communication lacking a relay or gap essential to the process of communication. Tracing the recurrence of Barthes's definition in the essays included in Image/Music/Text and in La Chambre claire , this paper argues that Barthes's definition is platonic in its will to dematerialize the troubling — graphic — immediacy of the photograph. He writes of the image in order to flee its signature. As a function of media, his categories are written in order to be insufficient and inadequate; to maintain an ineluctable difference between language heard and letters seen; to protect an idiom of loss which the photograph disallows. The article studies the strategies of his definition in «The Photographic Paradox» as instrument of abstraction, opposes the notion of code, in an aural sense, to audio-visual markers of closed relay in advertising, and critiques the layout and order of La Chambre claire in respect to Barthes's ideology of absence.

  12. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, eliminate old statements, introduce new ones and also include an extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  13. HipGISAXS: A Massively Parallel Code for GISAXS Simulation

    Science.gov (United States)

    Chourou, Slim; Sarje, Abhinav; Li, Xiaoye; Chan, Elaine; Hexemer, Alexander; Hipgisaxs Team

    2013-03-01

    Grazing Incidence Small-Angle Scattering (GISAXS) is a valuable experimental technique in probing nanostructures of relevance to polymer science. New high-performance computing algorithms, codes, and software tools have been implemented to analyze GISAXS images generated at synchrotron light sources. We have developed flexible massively parallel GISAXS simulation software ``HipGISAXS'' based on the Distorted Wave Born Approximation (DWBA). The software computes the diffraction pattern for any given superposition of custom shapes or morphologies in a user-defined region of the reciprocal space for all possible grazing incidence angles and sample rotations. This flexibility allows a straightforward study of a wide variety of possible polymer topologies and assemblies whether embedded in a thin film or a multilayered structure. Hence, this code enables guided investigations of the morphological and dynamical properties of relevance in various applications. The current parallel code is capable of computing GISAXS images for highly complex structures and with high resolutions and attaining speedups of 200x on a single-node GPU compared to the sequential code. Moreover, the multi-GPU (CPU) code achieved additional 900x (4000x) speedup on 930 GPU (6000 CPU) nodes. This work was supported by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  14. Development of a subchannel analysis code MATRA (Ver. {alpha})

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Y. J.; Hwang, D. H

    1998-04-01

    A subchannel analysis code MATRA-{alpha}, an interim version of MATRA, has been developed to be run on an IBM PC or HP WS based on the existing CDC CYBER mainframe version of COBRA-IV-I. This MATRA code is a thermal-hydraulic analysis code based on the subchannel approach for calculating the enthalpy and flow distribution in fuel assemblies and reactor cores for both steady-state and transient conditions. MATRA-{alpha} has been provided with an improved structure, various functions, and models to give a more convenient user environment and to increase the code accuracy. Among them, the pressure drop model has been improved to be applied to non-square-lattice rod arrays, and the lateral transport models between adjacent subchannels have been improved to increase the accuracy in predicting two-phase flow phenomena. Also included in this report are the detailed instructions for input data preparation and for auxiliary pre-processors to serve as a guide to those who want to use MATRA-{alpha}. In addition, we compared the predictions of MATRA-{alpha} with the experimental data on the flow and enthalpy distribution in three sample rod-bundle cases to evaluate the performance of MATRA-{alpha}. All the results revealed that the predictions of MATRA-{alpha} were better than those of COBRA-IV-I. (author). 16 refs., 1 tab., 13 figs.

  15. Schrödinger's code-script: not a genetic cipher but a code of development.

    Science.gov (United States)

    Walsby, A E; Hodge, M J S

    2017-06-01

    In his book What is Life? Erwin Schrödinger coined the term 'code-script', thought by some to be the first published suggestion of a hereditary code and perhaps a forerunner of the genetic code. The etymology of 'code' suggests three meanings relevant to 'code-script', which we distinguish as 'cipher-code', 'word-code' and 'rule-code'. Cipher-codes and word-codes entail translation of one set of characters into another. The genetic code comprises not one but two cipher-codes: the first is the DNA 'base-pairing cipher'; the second is the 'nucleotide-amino-acid cipher', which involves the translation of DNA base sequences into amino-acid sequences. We suggest that Schrödinger's code-script is a form of 'rule-code', a set of rules that, like the 'highway code' or 'penal code', requires no translation of a message. Schrödinger first relates his code-script to chromosomal genes made of protein. Ignorant of its properties, however, he later abandons 'protein' and adopts in its place a hypothetical, isomeric 'aperiodic solid' whose atoms he imagines rearranged in countless different conformations, which together are responsible for the patterns of ontogenetic development. In an attempt to explain the large number of combinations required, Schrödinger referred to the Morse code (a cipher) but in doing so unwittingly misled readers into believing that he intended a cipher-code resembling the genetic code. We argue that the modern equivalent of Schrödinger's code-script is a rule-code of organismal development based largely on the synthesis, folding, properties and interactions of numerous proteins, each performing a specific task. Copyright © 2016. Published by Elsevier Ltd.

  16. Code source sans code: le cas de l'ENIAC.

    OpenAIRE

    De Mol, Liesbeth

    2016-01-01

    Doctoral; What is a program? What is (source) code? Is it a "text" open to literary analysis, or rather something that resides in the electronic circuits of the computer? Is it a technical object or a formal one? From a certain point of view, these purely philosophical questions have little to do with any reality whatsoever. But in computer science, these questions are at the heart of the discipline, and the answers given to them determine decisions which a...

  17. Code of ethics: principles for ethical leadership.

    Science.gov (United States)

    Flite, Cathy A; Harman, Laurinda B

    2013-01-01

    The code of ethics for a professional association incorporates values, principles, and professional standards. A review and comparative analysis of a 1934 pledge and codes of ethics from 1957, 1977, 1988, 1998, 2004, and 2011 for a health information management association was conducted. Highlights of some changes in the healthcare delivery system are identified as a general context for the codes of ethics. The codes of ethics are examined in terms of professional values and changes in the language used to express the principles of the various codes.

  18. Feature coding for image representation and recognition

    CERN Document Server

    Huang, Yongzhen

    2015-01-01

    This brief presents a comprehensive introduction to feature coding, which serves as a key module for the typical object recognition pipeline. The text offers a rich blend of theory and practice while reflecting the recent developments on feature coding, covering the following five aspects: (1) Review the state-of-the-art, analyzing the motivations and mathematical representations of various feature coding methods; (2) Explore how various feature coding algorithms have evolved over the years; (3) Summarize the main characteristics of typical feature coding algorithms and categorize them accordingly; (4) D

  19. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Marc A. Armand

    2008-07-01

    Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive-white-Gaussian-noise channel, can outperform their random counterparts of similar length and rate.

  20. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Mo Elisa

    2008-01-01

    Full Text Available Abstract This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive-white-Gaussian-noise channel, can outperform their random counterparts of similar length and rate.

  1. Protograph LDPC Codes Over Burst Erasure Channels

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high rate protograph based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high data rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.
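
    A minimal NumPy sketch of the protograph-plus-circulant construction mentioned above; the toy base matrix and lifting size are illustrative and are not the protographs designed in this work.

    import numpy as np

    def lift(base, Z, rng):
        """Replace each nonzero protograph entry with a random Z x Z circulant permutation
        and each zero entry with a Z x Z all-zero block, yielding the parity-check matrix."""
        rows, cols = base.shape
        H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
        I = np.eye(Z, dtype=np.uint8)
        for r in range(rows):
            for c in range(cols):
                if base[r, c]:
                    H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, rng.integers(Z), axis=1)
        return H

    rng = np.random.default_rng(0)
    base = np.array([[1, 1, 1, 0],      # a toy protograph, not one of the paper's designs
                     [1, 1, 0, 1]])
    H = lift(base, Z=8, rng=rng)
    print(H.shape, H.sum(axis=0)[:8])   # column weights follow the protograph column degrees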

  2. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
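
    The following Python sketch illustrates why block interleaving an MDS code handles a burst, using the simplest MDS code, a single parity check (SPC): with interleaving depth equal to the number of codewords, a burst no longer than that depth erases at most one symbol per codeword, which the parity recovers. The burst position and sizes are illustrative.

    def spc_encode(block):                      # append one parity symbol (XOR of the block)
        p = 0
        for b in block:
            p ^= b
        return block + [p]

    def interleave(codewords):                  # transmit column by column
        return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

    data = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]   # 4 codewords, k = 3 symbols each
    tx = interleave([spc_encode(b) for b in data])

    # A burst of 4 consecutive erasures touches each codeword at most once.
    burst_start = 5
    rx = [None if burst_start <= i < burst_start + 4 else s for i, s in enumerate(tx)]

    # De-interleave and repair each codeword from its parity.
    depth, n = 4, 4                              # 4 codewords, each of length n = 4
    codewords = [[rx[i * depth + j] for i in range(n)] for j in range(depth)]
    for cw in codewords:
        if None in cw:
            miss = cw.index(None)
            cw[miss] = 0
            for i, s in enumerate(cw):
                if i != miss:
                    cw[miss] ^= s
    print([cw[:3] for cw in codewords])          # the original data symbols are restored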

  3. Monomial codes seen as invariant subspaces

    Directory of Open Access Journals (Sweden)

    García-Planas María Isabel

    2017-08-01

    Full Text Available It is well known that cyclic codes are very useful because of their applications, since they are not computationally expensive and encoding can be easily implemented. The relationship between cyclic codes and invariant subspaces is also well known. In this paper a generalization of this relationship is presented between monomial codes over a finite field and hyperinvariant subspaces of 𝔽^n under an appropriate linear transformation. Using techniques of Linear Algebra it is possible to deduce certain properties for this particular type of codes, generalizing known results on cyclic codes.
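
    A short Python sketch of the underlying observation for the classical cyclic (rather than general monomial) case: a binary cyclic code, viewed as a subspace of GF(2)^n, is invariant under the cyclic-shift operator. The generator polynomial below is just one valid choice for n = 7.

    from itertools import product

    n = 7
    g = [1, 0, 1, 1]               # g(x) = 1 + x^2 + x^3, a factor of x^7 - 1 over GF(2)
    k = n - (len(g) - 1)

    def encode(msg):               # codeword = message polynomial times g(x), mod 2
        cw = [0] * n
        for i, m in enumerate(msg):
            for j, gj in enumerate(g):
                cw[i + j] ^= m & gj
        return tuple(cw)

    code = {encode(msg) for msg in product([0, 1], repeat=k)}
    shift = lambda c: (c[-1],) + c[:-1]    # the cyclic-shift operator, a linear map on GF(2)^7

    # Cyclic <=> the code is an invariant subspace of the shift operator.
    print(all(shift(c) in code for c in code))   # True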

  4. A (72, 36; 15) box code

    Science.gov (United States)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.

  5. ANIMALS - INDIVIDUAL - COUNTS, SPECIES IDENTIFICATION - ORGANISM LENGTH, SPECIES IDENTIFICATION - LIFE STAGE, TAXONOMIC CODE and species abundance profile and discrete sample data collected in the South Atlantic Ocean and South Pacific Ocean on the LAURENCE M. GOULD cruises LMG0104 and LMG0203 as part of the Southern Ocean GLOBEC project from 2001-05-01 to 2002-05-10 (NODC Accession 0112167)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NODC Accession 0112167 includes profile, discrete sample and biological data collected aboard the LAURENCE M. GOULD during cruises LMG0104 and LMG0203 in the South...

  6. ANIMALS - INDIVIDUAL - COUNTS, Displacement Volume, SPECIES IDENTIFICATION - LIFE STAGE, TAXONOMIC CODE and species abundance profile and discrete sample data collected in the South Atlantic Ocean and South Pacific Ocean on the LAURENCE M. GOULD cruises LMG0104 and LMG0203 as part of the Southern Ocean GLOBEC project from 2001-05-01 to 2002-05-10 (NODC Accession 0112170)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NODC Accession 0112170 includes profile, discrete sample and biological data collected aboard the LAURENCE M. GOULD during cruises LMG0104 and LMG0203 in the South...

  7. ANIMALS - INDIVIDUAL - COUNTS, Displacement Volume, SPECIES IDENTIFICATION - LIFE STAGE, TAXONOMIC CODE and species abundance profile and discrete sample data collected in the South Atlantic Ocean and South Pacific Ocean on the NATHANIEL B. PALMER cruises NBP0103, NBP0104 and others as part of the Southern Ocean GLOBEC project from 2001-04-30 to 2002-09-11 (NODC Accession 0112171)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NODC Accession 0112171 includes profile, discrete sample and biological data collected aboard the NATHANIEL B. PALMER during cruises NBP0103, NBP0104, NBP0202 and...

  8. Baby milk companies accused of breaching marketing code.

    Science.gov (United States)

    Wise, J

    1997-01-18

    A consortium of 27 religious and health organizations has released a report entitled "Cracking the Code," which criticizes the bottle-feeding marketing techniques used by Nestle, Gerber, Mead Johnson, Wyeth, and Nutricia. Research for the report was carried out in Thailand, Bangladesh, South Africa, and Poland using a random sample of 800 mothers and 120 health workers in each country. In all 4 sites, women had received information that violated the World Health Organization's 1981 international code of marketing breast milk substitutes. Violations included promoting artificial feeding without recognizing breast feeding as the best source of infant nutrition. The investigation also found that women and health workers in all 4 sites received free samples of artificial milk. The report includes detailed examples of manufacturer representatives making unrequested visits to give product information to mothers, providing incentives to health workers to promote products, and promoting products outside of health care facilities. While the International Association of Infant Food Manufacturers condemned the study as biased, the Nestle company promised to review the allegations contained in the report and to deal with any breaches in the code. The Interagency Group on Breastfeeding Monitoring, which prepared the report, was created in 1994 to provide data to groups supporting a boycott of Nestle for code violations.

  9. Structure-aware Local Sparse Coding for Visual Tracking

    KAUST Repository

    Qi, Yuankai

    2018-01-24

    Sparse coding has been applied to visual tracking and related vision problems with demonstrated success in recent years. Existing tracking methods based on local sparse coding sample patches from a target candidate and sparsely encode these using a dictionary consisting of patches sampled from target template images. The discriminative strength of existing methods based on local sparse coding is limited as spatial structure constraints among the template patches are not exploited. To address this problem, we propose a structure-aware local sparse coding algorithm which encodes a target candidate using templates with both global and local sparsity constraints. For robust tracking, we show that local regions of a candidate region should be encoded only with the corresponding local regions of the target templates that are the most similar from the global view. Thus, a more precise and discriminative sparse representation is obtained to account for appearance changes. To alleviate the issues with tracking drifts, we design an effective template update scheme. Extensive experiments on challenging image sequences demonstrate the effectiveness of the proposed algorithm against numerous state-of-the-art methods.

  10. Diletter circular codes over finite alphabets.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Strüngmann, Lutz

    2017-10-09

    The graph approach of circular codes recently developed (Fimmel et al.,2016) allows here a detailed study of diletter circular codes over finite alphabets. A new class of circular codes is identified, strong comma-free codes. New theorems are proved with the diletter circular codes of maximal length in relation to (i) a characterisation of their graphs as acyclic tournaments; (ii) their explicit description; and (iii) the non-existence of other maximal diletter circular codes. The maximal lengths of paths in the graphs of the comma-free and strong comma-free codes are determined. Furthermore, for the first time, diletter circular codes are enumerated over finite alphabets. Biological consequences of dinucleotide circular codes are analysed with respect to their embedding in the trinucleotide circular code X identified in genes and to the periodicity modulo 2 observed in introns. An evolutionary hypothesis of circular codes is also proposed according to their combinatorial properties. Copyright © 2017. Published by Elsevier Inc.

  11. Layered Wyner-Ziv video coding.

    Science.gov (United States)

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.

  12. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  13. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time-averaged measure (e.g. a rate code) or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which makes it possible to distinguish which scales of a signal are dominated by a complex temporal organization or by a randomly generated process. In terms of the neural code, this kind of analysis makes it possible to detect temporal scales at which a time-pattern coding scheme or, alternatively, a rate code is present. Additionally, by finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code can be established, and hence the word length of the code can be found. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna of a human patient with Parkinson's disease, and show that a time-pattern coding scheme and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
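
    A minimal NumPy sketch of a temporal structure function computed on an interspike-interval series; the surrogate data, exponent and lag range are illustrative and do not reproduce the authors' analysis of the pallidal recordings.

    import numpy as np

    def structure_function(x, taus, p=2):
        """S_p(tau) = <|x[t+tau] - x[t]|^p> over an interspike-interval series.
        Flat scaling suggests randomly generated (rate-code-like) scales; a sustained
        rise with tau suggests scales dominated by temporal structure (a time code)."""
        x = np.asarray(x, dtype=float)
        return np.array([np.mean(np.abs(x[t:] - x[:-t]) ** p) for t in taus])

    rng = np.random.default_rng(0)
    isi_random = rng.exponential(0.02, 5000)                      # Poisson-like surrogate
    isi_patterned = 0.02 + 0.01 * np.sin(np.arange(5000) / 20.0)  # strongly patterned surrogate

    taus = np.arange(1, 200)
    print(structure_function(isi_random, taus)[:3])
    print(structure_function(isi_patterned, taus)[:3])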

  14. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Science.gov (United States)

    2010-04-01

    24 CFR 200.926c, Housing and Urban Development Regulations, Minimum Property Standards, § 200.926c (2010): Model code provisions for use in partially accepted code jurisdictions...

  15. On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation

    NARCIS (Netherlands)

    Cronie, H.S.

    2005-01-01

    We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which

  16. 76 FR 77549 - Lummi Nation-Title 20-Code of Laws-Liquor Code

    Science.gov (United States)

    2011-12-13

    ...--Code of Laws--Liquor Code. The Code regulates and controls the possession, sale and consumption of... this Code allows for the possession and sale of alcoholic beverages within the Lummi Nation's..., Public Law 83-277, 67 Stat. 586, 18 U.S.C. 1161, as interpreted by the Supreme Court in Rice v. Rehner...

  17. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    Science.gov (United States)

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  18. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    Science.gov (United States)

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  19. Elastic Functional Coding of Riemannian Trajectories.

    Science.gov (United States)

    Anirudh, Rushil; Turaga, Pavan; Jingyong Su; Srivastava, Anuj

    2017-05-01

    such coding efficiently captures trajectories in applications such as action recognition, stroke rehabilitation, visual speech recognition, clustering and diverse sequence sampling. Using this framework, we obtain state-of-the-art recognition results, while reducing the dimensionality/ complexity by a factor of 100-250x. Since these mappings and codes are invertible, they can also be used to interactively-visualize Riemannian trajectories and synthesize actions.

  20. Measuring Source Code Similarity Using Reference Vectors

    Science.gov (United States)

    Ohno, Asako; Murao, Hajime

    In this paper, we propose a novel method to measure similarity between program source codes. Unlike other methods, ours does not compare two source codes directly but compares two reference vectors, where a reference vector is calculated from one source code and a set of reference source codes. This means that our method requires no original source code when considering an application open to the public, such as a search engine for program source code on the internet. We have built a simple search system and have evaluated it with Java source codes made in the university course of basic programming. Results show that the system can achieve a quite high average precision rate in a very short time, which means the proposed method can measure similarity correctly and very fast.
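
    A toy Python sketch of the reference-vector idea; the token-overlap similarity used here is only a stand-in for whatever pairwise measure the authors use, and the reference snippets are invented for illustration.

    import math

    def similarity(a: str, b: str) -> float:
        """Token-overlap (Jaccard) similarity between two source files (a stand-in measure)."""
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def reference_vector(code: str, references: list) -> list:
        """Similarity of one source file against every file in the reference set."""
        return [similarity(code, r) for r in references]

    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu, nv = math.sqrt(sum(x * x for x in u)), math.sqrt(sum(y * y for y in v))
        return dot / (nu * nv) if nu and nv else 0.0

    refs = ["for i in range ( n ) : total += i",
            "def main ( ) : print ( 'hello' )",
            "while queue : node = queue . pop ( )"]
    a = "for j in range ( m ) : s += j"
    b = "for k in range ( 10 ) : acc += k"
    # Two files are compared only through their reference vectors, so the indexed
    # originals never need to be stored or disclosed by a public search service.
    print(cosine(reference_vector(a, refs), reference_vector(b, refs)))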

  1. Codes over an infinite family of algebras

    Directory of Open Access Journals (Sweden)

    - Irwansyah

    2017-01-01

    Full Text Available In this paper, we will show some properties of codes over the ring $B_k=\mathbb{F}_p[v_1,\dots,v_k]/(v_i^2=v_i,\ \forall i=1,\dots,k)$. These rings form a family of commutative algebras over the finite field $\mathbb{F}_p$. We first discuss the form of maximal ideals and the characterization of automorphisms of the ring $B_k$. Then, we define a certain Gray map which can be used to give a connection between codes over $B_k$ and codes over $\mathbb{F}_p$. Using this connection, we give a characterization of the equivalence of codes over $B_k$ and of Euclidean self-dual codes. Furthermore, we give generators for the invariant ring of Euclidean self-dual codes over $B_k$ through the MacWilliams relation of the Hamming weight enumerator for such codes.

  2. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes a modification of a well-known image coding algorithm, named Block Truncation Coding (BTC), and its application to audio signal coding. The BTC algorithm was originally designed for black-and-white image coding. Since black-and-white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing quantizers more adequate for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
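
    For context, a short NumPy sketch of classic grayscale BTC, the algorithm being modified; the paper's redesigned quantizers for audio are not reproduced here.

    import numpy as np

    def btc_block(block):
        """Classic BTC on one block: keep a bitmap plus two output levels chosen so that
        the block mean and standard deviation are preserved."""
        mean, std = block.mean(), block.std()
        bitmap = block >= mean
        q, m = int(bitmap.sum()), block.size      # pixels above the mean / block size
        if q in (0, m):                            # flat block: one level suffices
            return bitmap, mean, mean
        low  = mean - std * np.sqrt(q / (m - q))
        high = mean + std * np.sqrt((m - q) / q)
        return bitmap, low, high

    def btc_decode(bitmap, low, high):
        return np.where(bitmap, high, low)

    rng = np.random.default_rng(0)
    img_block = rng.integers(0, 256, (4, 4)).astype(float)
    rec = btc_decode(*btc_block(img_block))
    print(img_block.mean(), rec.mean())            # first moment preserved
    print(img_block.std(),  rec.std())             # second moment preserved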

  3. Code Blue evaluation in children's hospital.

    Science.gov (United States)

    Sahin, Kubra Evren; Ozdinc, Oktay Zeki; Yoldas, Suna; Goktay, Aylin; Dorak, Selda

    2016-01-01

    The true alarm rate of Code Blue cases is at a low level in the Dr. Behçet Uz Children's Hospital in İzmir. This study aims to analyse the use of the Code Blue alarm cases in the children's hospital. This retrospective clinical study evaluated the age and the gender of the cases, the arrival time of the Code Blue team, the date and time of the Code Blue call, the reasons for the Code Blue call, and the verification, which were all obtained from the Code Blue forms of the hospital dated between January 2014 and January 2015. The data of 139 Code Blue cases' forms were investigated and divided into two groups, before and after the education, containing 88 and 51 cases, respectively. Conversive disorder (26% to 13%, P < ...) ... Code Blue cases were false calls, with female greater than male (P < ...) ...; that is to say, a pre-diagnosis team should be formed.

  4. Coding In-depth Semistructured Interviews

    DEFF Research Database (Denmark)

    Campbell, John L.; Quincy, Charles; Osserman, Jordan

    2013-01-01

    Many social science studies are based on coded in-depth semistructured interview transcripts. But researchers rarely report or discuss coding reliability in this work. Nor is there much literature on the subject for this type of data. This article presents a procedure for developing coding schemes...... for such data. It involves standardizing the units of text on which coders work and then improving the coding scheme’s discriminant capability (i.e., reducing coding errors) to an acceptable point as indicated by measures of either intercoder reliability or intercoder agreement. This approach is especially...... useful for situations where a single knowledgeable coder will code all the transcripts once the coding scheme has been established. This approach can also be used with other types of qualitative data and in other circumstances....

  5. Content Layer progressive Coding of Digital Maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2002-01-01

    A new lossless context based method is presented for content progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the World Wide Web. Progressive encoding is achieved by encoding the image in content layers based on color level or other predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template based context bilevel coding, context collapsing methods for multilevel images and arithmetic coding. Relative pixel patterns are used to collapse contexts. Expressions for calculating the resulting number of contexts are given. The new methods outperform existing schemes coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 50-70% on our layered map test images.

  6. Functions of code switching in multilingual classrooms

    Directory of Open Access Journals (Sweden)

    Ondene van Dulm

    2011-08-01

    Full Text Available The research reported in this paper focuses on the functions of code switching between English and Afrikaans in the classroom interactions of a secondary school in the Western Cape. The data, comprising audio recordings of classroom interactions, are analysed within the framework of Myers-Scotton's (1993a) Markedness Model, according to which there are four types of code switching, namely marked, unmarked, sequential unmarked, and exploratory code switching. Within each of these types of code switching, a number of specific functions of code switching in the classrooms observed are identified, such as expansion, clarification, and identity marking. The study concludes that the Markedness Model offers a useful framework within which to analyse types of code switching, and that code switching has a specific functional role to play within multicultural and multilingual classrooms.

  7. Evaluation of help model replacement codes

    Energy Technology Data Exchange (ETDEWEB)

    Whiteside, Tad [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, Thong [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, Gregory [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure-cap and into the waste containment zone at the Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which result in two recommended codes for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and the field data. From the results of this work, we conclude that the new codes perform nearly the same, although moving forward, we recommend HYDRUS-2D3D.

  8. Structural reliability codes for probabilistic design

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1997-01-01

    difficulties of ambiguity and definition show up when attempting to make the transition from a given authorized partial safety factor code to a superior probabilistic code. For any chosen probabilistic code format there is a considerable variation of the reliability level over the set of structures defined....... The last problem must be accepted as the state of the matter and it seems that it can only be solved pragmatically by standardizing a specific code format as reference format for constant reliability. By an example this paper illustrates that a presently valid partial safety factor code imposes a quite...... is suggested for guiding the choice of the reference probabilistic code format for constant reliability. In the author's opinion there is an urgent need for establishing a standard probabilistic reliability code. This paper presents some considerations that may be debatable, but nevertheless point...

  9. Genetic code, hamming distance and stochastic matrices.

    Science.gov (United States)

    He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E

    2004-09-01

    In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
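
    A small NumPy sketch, assuming the dinucleotide case, that builds one such Hamming-distance matrix from the Gray encoding quoted above and checks the symmetry and constant row/column sums reported in the abstract.

    import numpy as np
    from itertools import product

    gray = {'C': '00', 'U': '10', 'G': '11', 'A': '01'}   # the encoding used in the abstract

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # Genetic-code-based matrix for dinucleotides: entry (i, j) is the Hamming distance
    # between the 4-bit Gray encodings of the i-th and j-th dinucleotides.
    dinucleotides = [''.join(p) for p in product('CUGA', repeat=2)]
    bits = [gray[d[0]] + gray[d[1]] for d in dinucleotides]
    M = np.array([[hamming(a, b) for b in bits] for a in bits], dtype=float)

    # All rows and columns sum to the same value, so M divided by that sum is doubly
    # stochastic, and M is symmetric -- the properties reported for these matrices.
    print(np.allclose(M.sum(axis=0), M.sum(axis=1)), M.sum(axis=1)[0])
    print(np.allclose(M, M.T))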

  10. Teacher’s Use of Code Switching in the Classroom and Its Implications on Students’ Score

    Directory of Open Access Journals (Sweden)

    Clara Herlina K.

    2007-11-01

    Full Text Available Code switching is usually done by people who have mastered two languages well. Among the people who can fulfill these criteria are Indonesians who teach English. In teaching English to Indonesian students, English teachers do not always use English as the medium of instruction, they usually code switch to Indonesian. Research focuses on the teachers as the subjects who apply code switching in the classroom. The respondents are eight lecturers in Bina Nusantara University who teach English to non-English department students. This research analyses the speech of the teachers to find out the percentage of code switching and the uses of code switching in the classroom. Finally, the relation between code switching and the students’ scores is calculated using independent samples T-test.  

  11. Network Coded Software Defined Networking

    DEFF Research Database (Denmark)

    Hansen, Jonas; Roetter, Daniel Enrique Lucani; Krigslund, Jeppe

    2015-01-01

    Software defined networking has garnered large attention due to its potential to virtualize services in the Internet, introducing flexibility in the buffering, scheduling, processing, and routing of data in network routers. SDN breaks the deadlock that has kept Internet network protocols stagnant for decades, while applications and physical links have evolved. This article advocates for the use of SDN to bring about 5G network services by incorporating network coding (NC) functionalities. The latter constitutes a major leap forward compared to the state-of-the-art store-and-forward Internet paradigm. The inherent flexibility of both SDN and NC provides fertile ground to envision more efficient, robust, and secure networking designs, which may also incorporate content caching and storage, all of which are key challenges of the upcoming 5G networks. This article not only proposes the fundamentals...

  12. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    at the decoder side offering such benefits for these applications. Although there have been some advanced improvement techniques, improving the DVC coding efficiency is still challenging. The thesis addresses this challenge by proposing several iterative algorithms at different working levels, e.g. bitplane, band, and frame levels. In order to show the information theoretic basis, theoretical foundations of DVC are introduced. The first proposed algorithm applies parallel iterative decoding using multiple LDPC decoders to utilize cross bitplane correlation. To improve Side Information (SI) generation and noise modeling, and also to learn from the previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture

  13. Advanced Code for Photocathode Design

    Energy Technology Data Exchange (ETDEWEB)

    Ives, Robert Lawrence [Calabazas Creek Research, Inc., San Mateo, CA (United States); Jensen, Kevin [Naval Research Lab. (NRL), Washington, DC (United States); Montgomery, Eric [Univ. of Maryland, College Park, MD (United States); Bui, Thuc [Calabazas Creek Research, Inc., San Mateo, CA (United States)

    2015-12-15

    The Phase I activity demonstrated that PhotoQE could be upgraded and modified to allow input using a graphical user interface. Specific platform-dependent (e.g. IMSL) function calls were removed, and Fortran77 components were rewritten for Fortran95 compliance. The subroutines, specifically the common block structures and shared data parameters, were reworked to allow the GUI to update material parameter data, and the system was targeted for desktop personal computer operation. The new structure overcomes the previous rigid and unmodifiable library structures by implementing new materials library data sets and repositioning the library values to external files. Material data may originate from published literature or experimental measurements. Further optimization and restructuring would allow custom and specific emission models for beam codes that rely on parameterized photoemission algorithms. These would be based on simplified and parametric representations updated and extended from previous versions (e.g., Modified Fowler-Dubridge, Modified Three-Step, etc.).

  14. Quantum coding with finite resources

    Science.gov (United States)

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M.

    2016-01-01

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances. PMID:27156995
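
    For orientation, finite-blocklength trade-offs of this kind are usually summarised by a normal approximation. As a schematic form (an assumption about the general shape of such expansions, not a formula quoted from the paper), the best rate at blocklength n and tolerated infidelity epsilon behaves as

        \[
          R(n,\varepsilon) \;\approx\; Q(\mathcal{N}) \;+\; \sqrt{\frac{V(\mathcal{N})}{n}}\;\Phi^{-1}(\varepsilon),
        \]

    where \(Q(\mathcal{N})\) is the capacity, \(V(\mathcal{N})\) the channel dispersion, and \(\Phi^{-1}\) the inverse of the standard normal cumulative distribution function.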

  15. Monte Carlo simulation code modernization

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The continual development of sophisticated transport simulation algorithms allows increasingly accurate description of the effect of the passage of particles through matter. This modelling capability finds applications in a large spectrum of fields, from medicine to astrophysics, and of course HEP. These new capabilities, however, come at the cost of greater computational intensity, which increases the demand on computing resources. This is particularly true for HEP, where the demand for more simulation is driven by the need for both more accuracy and more precision, i.e. better models and more events. HEP has usually relied on "Moore's law" evolution, but for almost ten years the increase in clock speed has stalled, and additional computing capacity now comes in the form of many-core or accelerator-based hardware architectures. To harness these opportunities we need to adapt our code to concurrent programming models, taking advantage of both SIMD and SIMT architectures. Th...
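
    As a hedged illustration of the kind of refactoring this implies (not code from any particular simulation toolkit): a structure-of-arrays data layout with independent loop iterations lets a compiler map particle stepping onto SIMD lanes, and the same loop body maps onto one-thread-per-particle SIMT kernels.

        #include <stddef.h>

        /* Structure-of-arrays layout: each field is contiguous in memory,
         * so the stepping loop below vectorizes naturally. */
        typedef struct {
            size_t n;
            double *x, *y, *z;      /* positions  */
            double *vx, *vy, *vz;   /* direction/velocity components */
        } ParticleSoA;

        /* Advance all particles by one step of length dt.  The iterations
         * are independent, making the loop a candidate for SIMD
         * auto-vectorization or a SIMT (GPU) kernel. */
        void step_particles(ParticleSoA *p, double dt)
        {
            for (size_t i = 0; i < p->n; ++i) {
                p->x[i] += p->vx[i] * dt;
                p->y[i] += p->vy[i] * dt;
                p->z[i] += p->vz[i] * dt;
            }
        }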

  16. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods into a problem solvable by module minimisation; by applying the fastest known algorithms for this general problem, we then obtain realisations of each paradigm which are as fast as or faster than all previously known methods. An element of this is the "2D key equation", a heavily generalised form of the classical key equation, and we show how to solve such equations using module minimisation, or using our new Demand-Driven algorithm, which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we...
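
    For orientation, the classical key equation that the "2D key equation" generalises is the relation used in syndrome-based decoding of a Reed–Solomon code with error-correction radius t (a standard statement given here for background, not a formula from the thesis):

        \[
          \Lambda(x)\, S(x) \;\equiv\; \Omega(x) \pmod{x^{2t}},
          \qquad \deg \Omega < \deg \Lambda \le t,
        \]

    where \(\Lambda\) is the error-locator polynomial, \(S\) the syndrome polynomial, and \(\Omega\) the error-evaluator polynomial.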

  17. ... An example of flawed code

    CERN Multimedia

    Computer Security Team

    2011-01-01

    Do you recall our small exercise in the last issue of the Bulletin? We were wondering how well written the following code was:

        /* Safely Exec program: drop privileges to user uid and group
         * gid, and use chroot to restrict file system access to jail
         * directory. Also, don't allow program to run as a
         * privileged user or group */
        void ExecUid(int uid, int gid, char *jailDir, char *prog, char *const argv[])
        {
          if (uid == 0 || gid == 0) {
            FailExit("ExecUid: root uid or gid not allowed");
          }

          chroot(jailDir);  /* restrict access to this dir */

          setuid(uid);      /* drop privs */
          setgid(gid);

          fprintf(LOGFILE, "Execvp of %s as uid=%d gid=%d\
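
    The exercise's point is that this snippet is flawed. As a hedged sketch of how such a privilege-dropping sequence is usually hardened (an illustration under common POSIX assumptions, not the Bulletin's own answer): check every return value, pair chroot() with chdir("/"), and drop the group before the user, since after setuid() an unprivileged process can no longer change its gid.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* Hypothetical hardened variant, for illustration only. */
        static void fail_exit(const char *msg)
        {
            perror(msg);
            exit(EXIT_FAILURE);
        }

        void exec_uid(uid_t uid, gid_t gid, const char *jail_dir,
                      const char *prog, char *const argv[])
        {
            if (uid == 0 || gid == 0)
                fail_exit("exec_uid: root uid or gid not allowed");

            /* Restrict the filesystem view, then move into the new root. */
            if (chroot(jail_dir) != 0 || chdir("/") != 0)
                fail_exit("exec_uid: chroot/chdir failed");

            /* Drop the group first: once the uid has changed, the process
             * may no longer have permission to change its gid. */
            if (setgid(gid) != 0)
                fail_exit("exec_uid: setgid failed");
            if (setuid(uid) != 0)
                fail_exit("exec_uid: setuid failed");

            execvp(prog, argv);
            fail_exit("exec_uid: execvp failed"); /* reached only on error */
        }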

  18. New constructions of MDS codes with complementary duals

    OpenAIRE

    Chen, Bocong; Liu, Hongwei

    2017-01-01

    Linear complementary-dual (LCD for short) codes are linear codes that intersect with their duals trivially. LCD codes have been used in certain communication systems. It has recently been found that LCD codes can be applied in cryptography. This application has renewed interest in the construction of LCD codes having a large minimum distance. MDS codes are optimal in the sense that the minimum distance cannot be improved for a given length and code size. Constructing LCD MDS codes is thu...
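
    As standard background (not a result claimed by this preprint): a linear code C is LCD precisely when it meets its dual only in the zero codeword, and Massey's classical criterion characterises this via any generator matrix G of C:

        \[
          C \cap C^{\perp} = \{\mathbf{0}\}
          \quad\Longleftrightarrow\quad
          \det\!\left(G G^{T}\right) \neq 0 .
        \]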

  19. An Efficient Partial Sums Generator for Constituent Code based Successive Cancellation Decoding of Polar Codes

    OpenAIRE

    Che, Tiben; Choi, Gwan

    2016-01-01

    This paper proposes the architecture of a partial-sum generator for a constituent-code-based polar code decoder. Constituent-code-based polar decoders have the advantage of low latency. However, no purposefully designed partial-sum generator exists that can yield the desired timing for the decoder. We first derive the mathematical representation of the partial-sum set $\bm{\beta^c}$ corresponding to each constituent code. From this, we concoct a shift-register-based partial sum...
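
    For background (a generic illustration of what partial sums are, not the architecture proposed in the paper): in successive-cancellation decoding, the partial sums are re-encoded versions of already-decided bits, combined at each stage by the length-2 polar kernel, i.e. parent = [left XOR right, right].

        #include <stddef.h>
        #include <stdint.h>

        /* Combine the partial sums of two decoded halves (each of length
         * half_len) into the partial sums of their parent node, following
         * the 2x2 polar kernel.  This is the quantity a partial-sum
         * generator must supply to the g-function computations of an
         * SC decoder. */
        void combine_partial_sums(const uint8_t *beta_left,
                                  const uint8_t *beta_right,
                                  uint8_t *beta_parent,
                                  size_t half_len)
        {
            for (size_t i = 0; i < half_len; ++i) {
                beta_parent[i]            = beta_left[i] ^ beta_right[i];
                beta_parent[i + half_len] = beta_right[i];
            }
        }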

  20. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Langenbuch, S.; Austregesilo, H.; Velkov, K. [GRS, Garching (Germany)] [and others]

    1997-07-01

    The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described, and general considerations for coupling these codes are discussed. Two different basic approaches to coupling are identified, and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the ATHLET system is presented. To date, this interface has been used to couple three different 3D neutronics codes.
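
    As a hedged, schematic illustration of what such a coupling interface exchanges (an operator-splitting toy model with invented names, not the ATHLET interface itself): per time step, the neutronics code consumes the latest thermal-hydraulic feedback and returns an updated power distribution, which the thermal-hydraulics code then uses to advance its own fields.

        #define NCELLS 8   /* illustrative mesh size */

        typedef struct { double power[NCELLS]; } Neutronics;
        typedef struct { double fuel_temp[NCELLS]; } ThermalHydraulics;

        /* Placeholder single-step solvers: real codes would solve the
         * neutron kinetics and fluid-dynamics equations here; these toy
         * updates only show the direction of the data exchange. */
        static void neutronics_step(Neutronics *n, const ThermalHydraulics *th, double dt)
        {
            for (int i = 0; i < NCELLS; ++i)
                n->power[i] *= 1.0 - 1.0e-4 * dt * th->fuel_temp[i]; /* toy feedback */
        }

        static void th_step(ThermalHydraulics *th, const Neutronics *n, double dt)
        {
            for (int i = 0; i < NCELLS; ++i)
                th->fuel_temp[i] += 1.0e-3 * dt * n->power[i];        /* toy heat-up */
        }

        /* Explicit (operator-splitting) coupling loop: each code advances
         * one time step using the other's most recent fields. */
        void coupled_transient(Neutronics *n, ThermalHydraulics *th, double dt, int nsteps)
        {
            for (int step = 0; step < nsteps; ++step) {
                neutronics_step(n, th, dt);
                th_step(th, n, dt);
            }
        }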