WorldWideScience

Sample records for translating source codes

  1. Serial-data correlator/code translator

    Science.gov (United States)

    Morgan, L. E.

    1977-01-01

    System, consisting of sampling flip flop, memory (either RAM or ROM), and memory buffer, correlates sampled data with predetermined acceptance code patterns, translates acceptable code patterns to nonreturn-to-zero code, and identifies data dropouts.
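The correlator described above is hardware, but its lookup step can be sketched in software: sampled code words are matched against acceptance patterns held in memory, matches are translated to nonreturn-to-zero bits, and unmatched words flag a dropout. The acceptance table below is an illustrative invention (Manchester-style pairs), not the patent's actual patterns.

```python
# Hypothetical acceptance table: sampled two-bit words -> NRZ bit
ACCEPTANCE = {
    (0, 1): 0,   # accepted pattern for logical 0
    (1, 0): 1,   # accepted pattern for logical 1
}

def correlate(samples):
    """Translate pairs of samples to NRZ bits; None marks a data dropout."""
    out = []
    for i in range(0, len(samples) - 1, 2):
        word = (samples[i], samples[i + 1])
        out.append(ACCEPTANCE.get(word))  # None when no pattern matches
    return out

bits = correlate([0, 1, 1, 0, 1, 1, 0, 1])
print(bits)  # the third word (1, 1) matches no pattern and is a dropout
```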

  2. Binary translation using peephole translation rules

    Science.gov (United States)

    Bansal, Sorav; Aiken, Alex

    2010-05-04

    An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
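The core idea can be sketched as a rule table that maps short source-ISA instruction sequences directly to target-ISA sequences, with the translator greedily matching the longest applicable rule. The rules below are illustrative inventions, not the patent's actual learned rules.

```python
# Toy peephole rule table: source instruction window -> target instructions
RULES = {
    ("mov r1, 0",):              ["xor eax, eax"],         # zeroing idiom
    ("add r1, 1",):              ["inc eax"],
    ("mov r1, r2", "add r1, 4"): ["lea eax, [ebx + 4]"],   # fused pair
}

def translate(code, rules, max_window=2):
    out, i = [], 0
    while i < len(code):
        # Greedily try the longest window that matches a rule.
        for w in range(min(max_window, len(code) - i), 0, -1):
            window = tuple(code[i:i + w])
            if window in rules:
                out.extend(rules[window])
                i += w
                break
        else:
            raise ValueError(f"no rule for {code[i]!r}")
    return out

print(translate(["mov r1, r2", "add r1, 4", "add r1, 1"], RULES))
```

In the actual system such rules are generated offline by superoptimization; the lookup at translation time stays this simple.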

  3. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

In 1981 we translated the well-known MATHEW and ADPIC computer codes, together with their auxiliary codes, from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis: using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional prediction of the concentration of gases and particulates released to the atmosphere; it calculates concentrations in grid cells by the particle-in-cell method. Both are written in LLLTRAN, the LLNL Fortran dialect, and are implemented on the CDC 7600 computers of LLNL. This report describes i) the computational methods of MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) the translation procedures from the CDC version to the FACOM M-200. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve JAERI researchers as a comparison and reference for their work. (author)

  4. Serbian translation of French Code of Civil Procedure from 1837: Part two: Legal terminology of the translation

    Directory of Open Access Journals (Sweden)

    Stanković Uroš N.

    2015-01-01

The article deals with legal terms appearing in the Serbian translation of the French Code of Civil Procedure (Code de procédure civile, 1806) authored by the Serbian writer and politician Lazar Zuban (1795-1850). The author attempted to determine whether the terms used by Zuban had existed in historical sources predating the translator's work. If so, it would mean that Zuban was using already existing technical terms. In cases in which a legal term could not be found in texts older than Zuban's work, the author tried to establish whether the unfound term had been the translator's invention. As for the terms of civil law, Zuban mostly took over words already present in the Serbian vocabulary of the time. This fact is easily explainable: family, property, contracts, torts, and inheritance figure prominently in everyday life, which caused the terminology of civil law to be relatively well developed. By contrast, terms belonging to civil procedure were scarce, because the judiciary and court procedure of Zuban's time were still at a rudimentary level. That is why the translator had to coin his own legal terms. Zuban did not translate German legal terms (the translator used a German translation of the Code as his protograph) mechanically; he made an effort to fathom the meaning of the word in question and find an adequate Serbian equivalent. In some cases that effort was fruitful. Nevertheless, in the long term Zuban's labor was in vain, as none of his coined words survived in Serbian legal terminology.

  5. Ribosome Profiling Reveals Pervasive Translation Outside of Annotated Protein-Coding Genes

    Directory of Open Access Journals (Sweden)

    Nicholas T. Ingolia

    2014-09-01

Ribosome profiling suggests that ribosomes occupy many regions of the transcriptome thought to be noncoding, including 5′ UTRs and long noncoding RNAs (lncRNAs). Apparent ribosome footprints outside of protein-coding regions raise the possibility of artifacts unrelated to translation, particularly when they occupy multiple, overlapping open reading frames (ORFs). Here, we show hallmarks of translation in these footprints: copurification with the large ribosomal subunit, response to drugs targeting elongation, trinucleotide periodicity, and initiation at early AUGs. We develop a metric for distinguishing between 80S footprints and nonribosomal sources using footprint size distributions, which validates the vast majority of footprints outside of coding regions. We present evidence for polypeptide production beyond annotated genes, including the induction of immune responses following human cytomegalovirus (HCMV) infection. Translation is pervasive on cytosolic transcripts outside of conserved reading frames, and direct detection of this expanded universe of translated products enables efforts at understanding how cells manage and exploit its consequences.

  6. DOMESTICATION IN THE TRANSLATION OF D. BROWN’S "THE DA VINCI CODE"

    Directory of Open Access Journals (Sweden)

    Gintarė Aleknavičiūtė

    2013-10-01

Literary translation is one of the most widely discussed topics in Translation Studies. There are different opinions on and approaches to literary translation. On the one hand, some theorists and translators suggest that linguistic aspects such as syntax and lexis are of great importance to literary translation; one must keep to the rules of the target language without digressing from the original meaning, after all. On the other hand, some scholars believe these factors are insignificant, because turning translation into a linguistic exercise undermines the more important textual, cultural, and situational factors (Leonardi 2000). However, the application of Grice’s Cooperative Principle to literary translation allows a mixture of both the linguistic aspects and all that lies beyond the meaning. The study was inspired by Kirsten Malmkjaer, Gideon Toury and Kristina Shaffner’s debate on Norms, Maxims and Conventions in Translation Studies and Pragmatics (Shaffner 1999). The aim of the article is to analyse the Lithuanian translation of D. Brown’s "The Da Vinci Code" within the framework of Grice’s Cooperative Principle and the strategy of domestication, by reviewing domestication and foreignization and introducing Grice’s Cooperative Principle. The research proves that even though it is virtually impossible for a translator to convey the meaning of the source text exactly as it is given, the insufficient use of domestication in the Lithuanian translation of "The Da Vinci Code" emphasises the presence of the translator and disrupts the ease of reading.

  7. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.

  9. Ground Operations Aerospace Language (GOAL). Volume 4: Interpretive code translator

    Science.gov (United States)

    1973-01-01

This specification identifies and describes the principal functions and elements of the Interpretive Code Translator which has been developed for use with the GOAL Compiler. The translator enables the user to convert a compiled GOAL program to a highly general binary format designed for interpretive execution. The translator program provides user controls for selecting various output types and formats. These controls provide a means of accommodating many of the implementation options discussed in the Interpretive Code Guideline document. The technical design approach is given. The relationship between the translator and the GOAL compiler is explained, and the principal functions performed by the translator are described. Specific constraints regarding the use of the translator are discussed. The control options are described. These options enable the user to select outputs to be generated by the translator and to control various aspects of the translation processing.

  10. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM) codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.

  11. Programming peptidomimetic syntheses by translating genetic codes designed de novo.

    Science.gov (United States)

    Forster, Anthony C; Tan, Zhongping; Nalam, Madhavi N L; Lin, Hening; Qu, Hui; Cornish, Virginia W; Blacklow, Stephen C

    2003-05-27

    Although the universal genetic code exhibits only minor variations in nature, Francis Crick proposed in 1955 that "the adaptor hypothesis allows one to construct, in theory, codes of bewildering variety." The existing code has been expanded to enable incorporation of a variety of unnatural amino acids at one or two nonadjacent sites within a protein by using nonsense or frameshift suppressor aminoacyl-tRNAs (aa-tRNAs) as adaptors. However, the suppressor strategy is inherently limited by compatibility with only a small subset of codons, by the ways such codons can be combined, and by variation in the efficiency of incorporation. Here, by preventing competing reactions with aa-tRNA synthetases, aa-tRNAs, and release factors during translation and by using nonsuppressor aa-tRNA substrates, we realize a potentially generalizable approach for template-encoded polymer synthesis that unmasks the substantially broader versatility of the core translation apparatus as a catalyst. We show that several adjacent, arbitrarily chosen sense codons can be completely reassigned to various unnatural amino acids according to de novo genetic codes by translating mRNAs into specific peptide analog polymers (peptidomimetics). Unnatural aa-tRNA substrates do not uniformly function as well as natural substrates, revealing important recognition elements for the translation apparatus. Genetic programming of peptidomimetic synthesis should facilitate mechanistic studies of translation and may ultimately enable the directed evolution of small molecules with desirable catalytic or pharmacological properties.

  12. Experimental annotation of post-translational features and translated coding regions in the pathogen Salmonella Typhimurium

    Energy Technology Data Exchange (ETDEWEB)

    Ansong, Charles; Tolic, Nikola; Purvine, Samuel O.; Porwollik, Steffen; Jones, Marcus B.; Yoon, Hyunjin; Payne, Samuel H.; Martin, Jessica L.; Burnet, Meagan C.; Monroe, Matthew E.; Venepally, Pratap; Smith, Richard D.; Peterson, Scott; Heffron, Fred; Mcclelland, Michael; Adkins, Joshua N.

    2011-08-25

Complete and accurate genome annotation is crucial for comprehensive and systematic studies of biological systems. For example, systems biology-oriented genome-scale modeling efforts greatly benefit from accurate annotation of protein-coding genes when developing properly functioning models. However, protein-coding genes for most new genomes are determined almost entirely by inference, using computational predictions with significant documented error rates (>15%). Furthermore, gene prediction programs provide no information on biologically important post-translational processing events critical for protein function. With the ability to directly measure peptides arising from expressed proteins, mass spectrometry-based proteomics approaches can be used to augment and verify coding regions of a genomic sequence and, importantly, to detect post-translational processing events. In this study we utilized “shotgun” proteomics to guide accurate primary genome annotation of the bacterial pathogen Salmonella Typhimurium 14028 and so facilitate a systems-level understanding of Salmonella biology. The data provide protein-level experimental confirmation for 44% of predicted protein-coding genes, suggest revisions to 48 genes assigned incorrect translational start sites, and uncover 13 non-annotated genes missed by gene prediction programs. We also present a comprehensive analysis of post-translational processing events in Salmonella, revealing a wide range of complex chemical modifications (70 distinct modifications) and confirming more than 130 signal peptide and N-terminal methionine cleavage events. This study highlights several ways in which proteomics data applied during the primary stages of annotation can improve the quality of genome annotations, especially with regard to the annotation of mature protein products.

  13. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised a ²²⁶Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the ²²⁶Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained in the experimental CAI study are discussed.

  14. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as a list of the meanings of all the possible values. This should greatly facilitate reactor licensing applications
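The method above can be sketched generically: a mapping table drives the translation of one code version's input variables into another's, while a verification log records every name, value, and the meaning of each possible value for later human checking. The variable names and meanings below are hypothetical placeholders, not VSOP's actual input format.

```python
# Hypothetical mapping table: old_name -> (new_name, {value: meaning})
MAPPING = {
    "IGEOM": ("GEOMETRY_TYPE", {1: "cylindrical", 2: "spherical"}),
    "NZONE": ("ZONE_COUNT", None),  # free numeric entry, no enumerated meanings
}

def translate_model(old_model, mapping):
    """Translate an input model dict and build a verification log."""
    new_model, log = {}, []
    for old_name, value in old_model.items():
        new_name, meanings = mapping[old_name]
        new_model[new_name] = value
        note = meanings.get(value, "?") if meanings else "numeric"
        log.append(f"{old_name} -> {new_name} = {value} ({note})")
    return new_model, log

model, log = translate_model({"IGEOM": 2, "NZONE": 45}, MAPPING)
print(model)
print("\n".join(log))
```

A regulator can then verify the log line by line instead of comparing thousands of raw numeric entries.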

  15. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as a list of the meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  16. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low…

  17. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources, for which invented codes comprised 1.7-7.4% of prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
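The coverage measurement described above can be sketched as a volume-weighted lookup: what fraction of prescription volume carries a product code present in a knowledge base's mapping table? The codes below are made-up placeholders, not real NDCs.

```python
from collections import Counter

def coverage(prescriptions, dkb_codes):
    """Fraction of prescription volume whose product code the DKB covers.

    prescriptions: iterable of NDC-like codes, one per dispensed record.
    dkb_codes: set of codes present in the DKB mapping table.
    """
    volume = Counter(prescriptions)
    covered = sum(n for code, n in volume.items() if code in dkb_codes)
    return covered / sum(volume.values())

rx = ["0002-1433", "0002-1433", "0069-3150", "LOCAL-17"]  # hypothetical codes
dkb = {"0002-1433", "0069-3150"}
print(f"{coverage(rx, dkb):.0%}")  # the invented local code is uncovered
```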

  18. Automatic translation of MPI source into a latency-tolerant, data-driven form

    International Nuclear Information System (INIS)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric; Quinlan, Dan; Baden, Scott

    2017-01-01

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo’s performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.
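The data-driven execution model described above can be illustrated with a toy scheduler: tasks are partially ordered by a dependency graph and run as soon as their inputs are ready, so computation independent of a communication step can overlap it. This is a sketch of the general idea, not Bamboo's actual runtime; task names are invented.

```python
from collections import deque

def run(tasks, deps):
    """Execute tasks in data-driven order.

    tasks: {name: fn}; deps: {name: set of prerequisite names}.
    """
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    children = {t: [u for u in tasks if t in deps.get(u, ())] for t in tasks}
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                      # run the task
        order.append(t)
        for u in children[t]:           # release tasks whose inputs are ready
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    return order

order = run(
    {"halo_exchange": lambda: None, "interior": lambda: None,
     "boundary": lambda: None},
    {"boundary": {"halo_exchange"}},
)
print(order)  # 'interior' may run while the exchange is outstanding
```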

  19. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  20. QEFSM model and Markov Algorithm for translating Quran reciting rules into Braille code

    Directory of Open Access Journals (Sweden)

    Abdallah M. Abualkishik

    2015-07-01

The Holy Quran is the central religious text of Islam. Muslims are expected to read, understand, and apply the teachings of the Holy Quran. The Holy Quran was translated to Braille code as normal Arabic text, without its reciting rules included; users of that transliteration are thus unable to recite the Quran the right way. In this work, the Quran Braille Translator (QBT) presents a specific translator to translate Quran verses and their reciting rules into Braille code. A Quran Extended Finite State Machine (QEFSM) model is proposed through this study, as it is able to detect the Quran reciting rules (QRR) from the Quran text. Basis path testing was used to evaluate the inner working of the model by checking all of its test cases. A Markov Algorithm (MA) was used for translating the detected QRR and Quran text into the matched Braille code. The data entries for QBT are Arabic letters and diacritics. The output of this study is seen in double lines of Braille symbols: the first line holds the proposed Quran reciting rules and the second line the Quran script.
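A Markov algorithm of the kind used above is an ordered list of rewrite rules applied repeatedly to the leftmost match, restarting from the first rule after every rewrite, until no rule applies. The rules below are invented placeholders to show the mechanism, not the paper's actual Arabic-to-Braille table.

```python
# Ordered rewrite rules: longer, higher-priority patterns come first,
# mirroring how a reciting-rule pattern must rewrite before single letters.
RULES = [
    ("aa", "A"),
    ("a",  "x"),
    ("b",  "y"),
]

def markov(text, rules, max_steps=1000):
    for _ in range(max_steps):
        for pattern, replacement in rules:
            i = text.find(pattern)
            if i >= 0:  # rewrite the leftmost match of the first matching rule
                text = text[:i] + replacement + text[i + len(pattern):]
                break   # restart from the first rule after every rewrite
        else:
            return text  # no rule matched: the algorithm halts
    raise RuntimeError("did not halt")

print(markov("aabab", RULES))
```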

  1. Vamos a Traducir los MRV (let's translate the VRM): linguistic and cultural inferences drawn from translating a verbal coding system from English into Spanish.

    Science.gov (United States)

    Caro, I; Stiles, W B

    1997-01-01

    Translating a verbal coding system from one language to another can yield unexpected insights into the process of communication in different cultures. This paper describes the problems and understandings we encountered as we translated a verbal response modes (VRM) taxonomy from English into Spanish. Standard translations of text (e.g., psychotherapeutic dialogue) systematically change the form of certain expressions, so supposedly equivalent expressions had different VRM codings in the two languages. Prominent examples of English forms whose translation had different codes in Spanish included tags, question forms, and "let's" expressions. Insofar as participants use such forms to convey nuances of their relationship, standard translations of counseling or psychotherapy sessions or other conversations may systematically misrepresent the relationship between the participants. The differences revealed in translating the VRM system point to subtle but important differences in the degrees of verbal directiveness and inclusion in English versus Spanish, which converge with other observations of differences in individualism and collectivism between Anglo and Hispanic cultures.

  2. Translator from the symbol coding language for the BUTs-20 processor of the in-core reactor control system

    International Nuclear Information System (INIS)

    Vorob'ev, D.M.; Golovanov, M.N.; Levin, G.L.; Parfenova, T.K.; Filatov, V.P.

    1978-01-01

A symbolic-language code translator is described; it has been developed to automate the preparation of programs for in-core control systems. The translator is written in the ASSEMBLER language included in the software of the M-6000 computer. Two passes over the source program are required to produce the operating program in the internal language of the BUTs-20 processor. The flowsheet and listing of the interrogation program of an analog-to-digital converter are presented. It is emphasized that the proposed translator reduces the time needed to construct programs for in-core control systems by a factor of 10-15 and improves their quality.

  3. CONNJUR spectrum translator: an open source application for reformatting NMR spectral data.

    Science.gov (United States)

    Nowling, Ronald J; Vyas, Jay; Weatherby, Gerard; Fenwick, Matthew W; Ellis, Heidi J C; Gryk, Michael R

    2011-05-01

NMR spectroscopists are hindered by the lack of standardization of spectral data among the file formats of the various NMR data processing tools. This lack of standardization is cumbersome, as researchers must perform their own file conversions in order to switch between processing tools, and it restricts the combination of tools that can be employed when no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation along with two key algorithmic improvements. The first is the translation of NMR spectral data (time and frequency domain) to a single in-memory data model, which allows a new file format to be added with two converter modules, a reader and a writer, instead of a separate converter to and from each existing format. Second, the use of layout descriptors allows a single fid data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimal user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.
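The hub-and-spoke architecture described above can be sketched as follows: each format contributes one reader and one writer targeting a shared in-memory model, so adding a format costs two modules rather than one converter per pair of formats. The format names and fields here are hypothetical, not CONNJUR's actual API.

```python
class Spectrum:
    """Shared in-memory model (drastically simplified)."""
    def __init__(self, points, domain):
        self.points, self.domain = points, domain

READERS, WRITERS = {}, {}

def reader(fmt):
    def register(fn):
        READERS[fmt] = fn
        return fn
    return register

def writer(fmt):
    def register(fn):
        WRITERS[fmt] = fn
        return fn
    return register

@reader("fmt_a")          # hypothetical comma-separated text format
def read_a(text):
    return Spectrum([float(x) for x in text.split(",")], "time")

@writer("fmt_b")          # hypothetical space-separated text format
def write_b(spec):
    return " ".join(f"{x:.1f}" for x in spec.points)

def convert(text, src, dst):
    # Any source format to any target format via the single in-memory model.
    return WRITERS[dst](READERS[src](text))

print(convert("1.0,2.5,3.0", "fmt_a", "fmt_b"))
```

With N formats this needs 2N modules instead of the N(N-1) pairwise converters.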

  4. CONNJUR spectrum translator: an open source application for reformatting NMR spectral data

    Energy Technology Data Exchange (ETDEWEB)

    Nowling, Ronald J.; Vyas, Jay [University of Connecticut Health Center, Department of Molecular, Microbial and Structural Biology (United States); Weatherby, Gerard [Western New England College, Department of Computer Science/Information Technology (United States); Fenwick, Matthew W. [University of Connecticut Health Center, Department of Molecular, Microbial and Structural Biology (United States); Ellis, Heidi J. C. [Western New England College, Department of Computer Science/Information Technology (United States); Gryk, Michael R., E-mail: gryk@uchc.edu [University of Connecticut Health Center, Department of Molecular, Microbial and Structural Biology (United States)

    2011-05-15

    NMR spectroscopists are hindered by the lack of standardization for spectral data among the file formats for various NMR data processing tools. This lack of standardization is cumbersome as researchers must perform their own file conversion in order to switch between processing tools, and it also restricts the combination of tools employed if no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation and introduces two key algorithmic improvements. The first is translation of NMR spectral data (time and frequency domain) to a single in-memory data model, which allows a new file format to be added with two converter modules, a reader and a writer, instead of a separate converter to each existing format. Secondly, the use of layout descriptors allows a single fid data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimum user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.

  5. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
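
    The scheme can be illustrated with the (7,4) Hamming code: the source block is treated as an error pattern, its syndrome is the compressed data, and decompression recovers the minimum-weight pattern in the corresponding coset. This toy sketch is distortionless only for sparse blocks (weight at most 1), mirroring the paper's low-entropy source assumption.

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: 3-bit syndromes.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(block):
    # The 7-bit source block is treated as an error pattern; its 3-bit
    # syndrome is the compressed data (7 bits in, 3 bits out).
    return H @ block % 2

def decompress(syndrome):
    # Recover the minimum-weight pattern with this syndrome (the coset
    # leader); exact whenever the source block has weight <= 1.
    best = None
    for bits in itertools.product([0, 1], repeat=7):
        e = np.array(bits)
        if np.array_equal(H @ e % 2, syndrome) and (best is None or e.sum() < best.sum()):
            best = e
    return best

block = np.array([0, 0, 0, 0, 1, 0, 0])      # sparse source block
assert np.array_equal(decompress(compress(block)), block)
```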

  6. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant

  7. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, X_i in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  8. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  9. Integrating source-language context into phrase-based statistical machine translation

    NARCIS (Netherlands)

    Haque, R.; Kumar Naskar, S.; Bosch, A.P.J. van den; Way, A.

    2011-01-01

    The translation features typically used in Phrase-Based Statistical Machine Translation (PB-SMT) model dependencies between the source and target phrases, but not among the phrases in the source language themselves. A swathe of research has demonstrated that integrating source context modelling

  10. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  11. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a "code." A variety of codes may be used; the essential mathematical property is that the code possesses a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination.
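
    The "sharply peaked autocorrelation" requirement can be checked numerically in one dimension. The sketch below uses a random ±1 array as the code (the Fresnel zone pattern discussed in the paper would work analogously) and shows that correlating the raw image of a point object with the code recovers the object's position.

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 63) * 2 - 1     # random ±1 code pattern

# Essential property: a sharply peaked autocorrelation function.
auto = np.correlate(code, code, mode='full')
peak = auto[len(code) - 1]                # zero-lag value = code length

# 1-D analogue of transmission imaging: a point object blurred by the
# extended coded source is recovered by correlating with the code.
scene = np.zeros(256)
scene[100] = 1.0                          # point object at position 100
raw = np.convolve(scene, code, mode='same')      # raw coded image
decoded = np.correlate(raw, code, mode='same')   # decoding step
```

    The off-peak autocorrelation values are strictly below the zero-lag peak, which is exactly what makes the decoding step unambiguous.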

  12. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is also developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
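
    The CDF-based sampling step can be illustrated generically: build a cumulative distribution over source cells from a (toy) power distribution, then draw cells by inverse-transform sampling. The numbers below are invented and this is not the JMCT/JSNT interface.

```python
import random

# Toy relative power distribution over source cells (values invented).
power = [0.5, 2.0, 1.0, 0.5]

# Source generation step: build the cumulative distribution function.
total = sum(power)
cdf = []
acc = 0.0
for p in power:
    acc += p / total
    cdf.append(acc)

def sample_cell(u):
    # Source sampling step: inverse-transform sampling on the CDF.
    for i, c in enumerate(cdf):
        if u < c:
            return i
    return len(cdf) - 1

random.seed(0)
counts = [0] * len(power)
for _ in range(10000):
    counts[sample_cell(random.random())] += 1
# Cell 1 carries half the total power, so it draws about half the samples.
```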

  13. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering shifting of processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  14. PATACSDB—the database of polyA translational attenuators in coding sequences

    Directory of Open Access Journals (Sweden)

    Malgorzata Habich

    2016-02-01

    Full Text Available Recent additions to the repertoire of gene expression regulatory mechanisms are polyadenylate (polyA) tracks encoding poly-lysine runs in protein sequences. Such tracks stall the translation apparatus and induce frameshifting independently of the effects of the charged nascent poly-lysine sequence on the ribosome exit channel. As such, they substantially influence the stability of mRNA and the amount of protein produced from a given transcript. Single base changes in these regions are enough to exert a measurable response on both protein and mRNA abundance; this makes each of these sequences a potentially interesting case study for the effects of synonymous mutation, gene dosage balance and natural frameshifting. Here we present PATACSDB, a resource that contains a comprehensive list of polyA tracks from over 250 eukaryotic genomes. Our data are based on the Ensembl genomic database of coding sequences and filtered with the 12A-1 algorithm, which selects polyA tracks with a minimal length of 12 A's, allowing for one mismatched base. The PATACSDB database is accessible at http://sysbio.ibb.waw.pl/patacsdb. The source code is available at http://github.com/habich/PATACSDB, and it includes the scripts with which the database can be recreated.
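
    The 12A-1 selection rule lends itself to a direct sketch: scan each coding sequence with a 12-base window and report a hit when at most one base is not an A. This is a minimal reimplementation of the published rule, not the PATACSDB pipeline itself.

```python
def has_polya_track(seq, min_len=12, max_mismatch=1):
    # Slide a min_len window over the sequence; report a polyA track
    # when some window contains at most max_mismatch non-A bases.
    seq = seq.upper()
    for i in range(len(seq) - min_len + 1):
        window = seq[i:i + min_len]
        if sum(base != 'A' for base in window) <= max_mismatch:
            return True
    return False

assert has_polya_track("AAAAAAGAAAAAA")       # one mismatch in a 12-mer
assert not has_polya_track("AAAAAAGGAAAAA")   # two mismatches in every 12-mer
```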

  15. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibilit...

  16. Modification of a translator of mnemocode the M-6000 computer

    International Nuclear Information System (INIS)

    Kurkina, N.V.; Medved', S.V.; Pshenichnikov, O.V.; Shchirov, A.G.

    1975-01-01

    A modification of the mnemonic code translator for the M-6000 computer is described. The modified translator provides diagnostics in the source program and efficient translation-time error recovery and editing procedures. This increases the productivity of programmers' labour and decreases the debugging time.

  17. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  18. Clean translation of an imperative reversible programming language

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock

    2011-01-01

    We describe the translation techniques used for the code generation in a compiler from the high-level reversible imperative programming language Janus to the low-level reversible assembly language PISA. Our translation is both semantics preserving (correct), in that target programs compute exactly the same functions as their source programs (cleanly, with no extraneous garbage output), and efficient, in that target programs conserve the complexities of source programs. In particular, target programs only require a constant amount of temporary garbage space. The given translation methods are generic, and should be applicable to any (imperative) reversible source language described with reversible flowcharts and reversible updates. To our knowledge, this is the first compiler between reversible languages where the source and target languages were independently developed; the first exhibiting both...

  19. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  20. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  1. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  2. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  3. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  4. International Test Comparisons: Reviewing Translation Error in Different Source Language-Target Language Combinations

    Science.gov (United States)

    Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming

    2018-01-01

    This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme of International…

  5. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.

  6. Translational illusion of acoustic sources by transformation acoustics.

    Science.gov (United States)

    Sun, Fei; Li, Shichao; He, Sailing

    2017-09-01

    An acoustic illusion of creating a translated acoustic source is designed by utilizing transformation acoustics. An acoustic source shifter (ASS) composed of layered acoustic metamaterials is designed to achieve such an illusion. A practical example where the ASS is made with naturally available materials is also given. Numerical simulations verify the performance of the proposed device. The designed ASS may have some applications in, e.g., anti-sonar detection.

  7. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLC's is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  8. Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.

    Science.gov (United States)

    Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir

    2017-08-01

    Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications related to all biomedical disciplines. Here, we aimed to understand how evolution shapes the bacteriophage lambda genes by performing a high resolution analysis of ribosomal profiling data and gene expression related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that evolution of viral coding regions is driven, among others, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency could be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that it can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant also to additional viruses. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
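
    Codon-composition comparisons of this kind start from simple codon counts per gene. A minimal sketch, using lysine's two synonymous codons as one illustrative family (the sequence is invented):

```python
from collections import Counter

# Tally codon usage in a coding sequence; comparing such tallies between
# early and late genes is the kind of composition analysis described.
LYS_CODONS = {'AAA', 'AAG'}   # lysine's two synonymous codons

def codon_counts(cds):
    length = len(cds) - len(cds) % 3          # whole codons only
    return Counter(cds[i:i + 3] for i in range(0, length, 3))

counts = codon_counts("AAAAAGAAAGGT")         # toy coding sequence
lys_usage = {c: counts[c] for c in LYS_CODONS}
```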

  9. A translator writing system for microcomputer high-level languages and assemblers

    Science.gov (United States)

    Collins, W. R.; Knight, J. C.; Noonan, R. E.

    1980-01-01

    In order to implement high level languages whenever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes a PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.
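
    A table-driven code generator of the kind such a system produces can be sketched minimally: a table maps each intermediate-representation opcode to a target-instruction template, and emission is a table lookup plus substitution. The opcode and instruction names here are illustrative, not the system's actual tables.

```python
# Table of target-instruction templates, keyed by IR opcode (invented).
TABLE = {
    'load':  'LD  R{dst}, {src}',
    'add':   'ADD R{dst}, R{a}, R{b}',
    'store': 'ST  R{src}, {dst}',
}

def emit(ir):
    # Code generation is a lookup in the table plus operand substitution.
    return [TABLE[op].format(**args) for op, args in ir]

# Toy IR for z = x + y.
program = [('load',  {'dst': 1, 'src': 'x'}),
           ('load',  {'dst': 2, 'src': 'y'}),
           ('add',   {'dst': 3, 'a': 1, 'b': 2}),
           ('store', {'src': 3, 'dst': 'z'})]
asm = emit(program)
```

    Retargeting the generator then amounts to swapping the table rather than rewriting the emitter.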

  10. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
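
    One simple modularity-style indicator (a toy illustration only, not the metric developed at Carleton) is the fraction of dependencies that stay inside a module boundary:

```python
# Toy dependency graph: (caller, callee) pairs between source files,
# each file assigned to a module. Names and numbers are invented.
modules = {'a.c': 'core', 'b.c': 'core', 'ui.c': 'ui', 'net.c': 'net'}
deps = [('a.c', 'b.c'), ('ui.c', 'a.c'), ('net.c', 'b.c'), ('ui.c', 'net.c')]

# Fraction of dependencies contained within one module: higher values
# indicate looser coupling between modules.
intra = sum(modules[u] == modules[v] for u, v in deps)
ratio = intra / len(deps)
```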

  11. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion, or at the lowest possible distortion given a specified bit rate. Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can ... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically...

  12. More or Less on the Mark? Translating Harold Pinter’s The Dwarfs: A Novel

    Directory of Open Access Journals (Sweden)

    Łukasz Borowiec

    2012-06-01

    Full Text Available A literary source text demands an individual approach from the translator in each act of translation. This approach involves a complex and multifaceted analysis of the source text. As Pinter's novel The Dwarfs provides rich ground for such analysis, I present a selection of translation issues against the backdrop of the more general problem of translatability. Pinter is a master of English dialogue, which makes its translation a truly daunting task. The conversations between the characters are filled with expressions from cricket, dated British cultural references, puns, literary and Biblical allusions, phrases and formulaic expressions characteristic of Cockney, and numerous allusions to Shakespeare as well as to Pinter's own earlier plays. I examine the translatability of The Dwarfs by discussing three translation codes: lexical-semantic, cultural and esthetic. Although these are closely interconnected and interdependent, I present a choice of issues within each code in order to submit for consideration the challenges facing a Pinter translator, as well as to show the complexity of Pinter's artistic vision in one of his earliest works.

  13. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  14. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
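
    The classical Blahut-Arimoto iteration that such algorithms extend can be sketched for the ordinary rate-distortion function of a Bernoulli(1/2) source under Hamming distortion. This is the standard textbook algorithm, not the action-dependent variant of the paper.

```python
import math

# Bernoulli(1/2) source, Hamming distortion, binary reproduction.
px = [0.5, 0.5]
d = [[0.0, 1.0], [1.0, 0.0]]
s = math.log(9.0)        # slope parameter; selects the distortion level
qy = [0.5, 0.5]          # output distribution, updated by the iteration

for _ in range(100):
    # Update p(y|x) proportional to q(y) * exp(-s * d(x, y)).
    pyx = []
    for x in range(2):
        w = [qy[y] * math.exp(-s * d[x][y]) for y in range(2)]
        z = sum(w)
        pyx.append([v / z for v in w])
    # Update q(y) as the induced output marginal.
    qy = [sum(px[x] * pyx[x][y] for x in range(2)) for y in range(2)]

rate = sum(px[x] * pyx[x][y] * math.log2(pyx[x][y] / qy[y])
           for x in range(2) for y in range(2))
dist = sum(px[x] * pyx[x][y] * d[x][y]
           for x in range(2) for y in range(2))
# At s = ln 9 this converges to D = 0.1 and R = 1 - h(0.1) ≈ 0.531 bits,
# matching the known formula R(D) = 1 - h(D) for this source.
```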

  15. Translational Creativity

    DEFF Research Database (Denmark)

    Nielsen, Sandro

    2010-01-01

    A long-established approach to legal translation focuses on terminological equivalence, making translators strictly follow the words of source texts. Recent research suggests that there is room for some creativity allowing translators to deviate from the source texts. However, little attention is given to genre conventions in source texts and the ways in which they can best be translated. I propose that translators of statutes with an informative function in expert-to-expert communication may be allowed limited translational creativity when translating specific types of genre convention. This creativity is a result of translators adopting either a source-language or a target-language oriented strategy and is limited by the pragmatic principle of co-operation. Examples of translation options are provided illustrating the different results in target texts. The use of a target-language oriented...

  16. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  17. Are translations longer than source texts? A corpus-based study of explicitation

    OpenAIRE

    Frankenberg-Garcia, A

    2009-01-01

    Explicitation is the process of rendering information which is only implicit in the source text explicit in the target text, and is believed to be one of the universals of translation (Blum-Kulka 1986, Olohan and Baker 2000, Øverås 1998, Séguinot 1988, Vanderauwera 1985). The present study uses corpus technology to attempt to shed some light on the complex relationship between translation, text length and explicitation. An awareness of what makes translations longer (or shorter) and more expl...

  18. SALT [System Analysis Language Translater]: A steady state and dynamic systems code

    International Nuclear Information System (INIS)

    Berry, G.; Geyer, H.

    1983-01-01

    SALT (System Analysis Language Translater) is a lumped parameter approach to system analysis which is totally modular. The modules are all precompiled and only the main program, which is generated by SALT, needs to be compiled for each unique system configuration. This is a departure from other lumped parameter codes where all models are written by MACROS and then compiled for each unique configuration, usually after all of the models are lumped together and sorted to eliminate undetermined variables. The SALT code contains a robust and sophisticated steady-state finder (non-linear equation solver), optimization capability and enhanced GEAR integration scheme which makes use of sparsity and algebraic constraints. The SALT systems code has been used for various technologies. The code was originally developed for open-cycle magnetohydrodynamic (MHD) systems. It was easily extended to liquid metal MHD systems by simply adding the appropriate models and property libraries. Similarly, the model and property libraries were expanded to handle fuel cell systems, flue gas desulfurization systems, combined cycle gasification systems, fluidized bed combustion systems, ocean thermal energy conversion systems, geothermal systems, nuclear systems, and conventional coal-fired power plants. Obviously, the SALT systems code is extremely flexible to be able to handle all of these diverse systems. At present, the dynamic option has only been used for LMFBR nuclear power plants and geothermal power plants. However, it can easily be extended to other systems and can be used for analyzing control problems. 12 refs

  19. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1, where infrastructure and...

  20. Allele-Selective Transcriptome Recruitment to Polysomes Primed for Translation: Protein-Coding and Noncoding RNAs, and RNA Isoforms.

    Directory of Open Access Journals (Sweden)

    Roshan Mascarenhas

    mRNA translation into proteins is highly regulated, but the role of mRNA isoforms, noncoding RNAs (ncRNAs), and genetic variants remains poorly understood. mRNA levels on polysomes have been shown to correlate well with expressed protein levels, pointing to polysomal loading as a critical factor. To study regulation and genetic factors of protein translation we measured levels and allelic ratios of mRNAs and ncRNAs (including microRNAs) in lymphoblast cell lines (LCLs) and in polysomal fractions. We first used targeted assays to measure polysomal loading of mRNA alleles, confirming reported genetic effects on translation of OPRM1 and NAT1, and detecting no effect of rs1045642 (3435C>T) in ABCB1 (MDR1) on polysomal loading, while supporting previous results showing increased mRNA turnover of the 3435T allele. Use of high-throughput sequencing of complete transcript profiles (RNA-Seq) in three LCLs revealed significant differences in polysomal loading of individual RNA classes and isoforms. Correlated polysomal distribution between protein-coding and non-coding RNAs suggests interactions between them. Allele-selective polysome recruitment revealed strong genetic influence for multiple RNAs, attributable either to differential expression of RNA isoforms or to differential loading onto polysomes, the latter defining a direct genetic effect on translation. Genes identified by different allelic RNA ratios between cytosol and polysomes were enriched with published expression quantitative trait loci (eQTLs) affecting RNA functions, and associations with clinical phenotypes. Polysomal RNA-Seq combined with allelic ratio analysis provides a powerful approach to study polysomal RNA recruitment and regulatory variants affecting protein translation.

  1. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)
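    The connection between chaotic coding and entropy coding can be sketched in a few lines. The toy Python example below is an illustration under our own assumptions, in double precision rather than the paper's finite-register implementation: a message selects a subinterval of [0,1) (exactly as in arithmetic coding), and decoding iterates the corresponding piecewise-linear map on a point of that interval:

```python
from collections import Counter

def build_partition(probs):
    """Partition [0,1) into one subinterval per symbol, sized by probability."""
    lo, part = 0.0, {}
    for s, p in probs.items():
        part[s] = (lo, p)
        lo += p
    return part

def encode(msg, part):
    """Shrink [0,1) to the subinterval selected by the message (arithmetic-style)."""
    lo, width = 0.0, 1.0
    for s in msg:
        l, p = part[s]
        lo, width = lo + width * l, width * p
    return lo + width / 2  # any point inside the final interval works

def decode(x, part, n):
    """Recover symbols by iterating the piecewise-linear map T(x) = (x - l_s) / p_s."""
    out = []
    for _ in range(n):
        for s, (l, p) in part.items():
            if l <= x < l + p:
                out.append(s)
                x = (x - l) / p  # the chaotic trajectory tracks the message
                break
    return "".join(out)

msg = "ABRACADABRA"
probs = {s: c / len(msg) for s, c in Counter(msg).items()}
part = build_partition(probs)
x0 = encode(msg, part)
print(decode(x0, part, len(msg)))  # "ABRACADABRA"
```

    For messages short enough that the final interval stays well above machine precision, the trajectory decodes exactly; the variable-length blocks of the paper are precisely what bounds this within a fixed register length.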

  2. An Open-Source Web-Based Tool for Resource-Agnostic Interactive Translation Prediction

    Directory of Open Access Journals (Sweden)

    Daniel Torregrosa

    2014-09-01

    We present a web-based open-source tool for interactive translation prediction (ITP) and describe its underlying architecture. ITP systems assist human translators by making context-based computer-generated suggestions as they type. Most ITP systems in the literature are strongly coupled with a statistical machine translation system that is conveniently adapted to provide the suggestions. Our system, however, follows a resource-agnostic approach and suggestions are obtained from any unmodified black-box bilingual resource. This paper reviews our ITP method and describes the architecture of Forecat, a web tool, partly based on the recent technology of web components, that eases the use of our ITP approach in any web application requiring this kind of translation assistance. We also evaluate the performance of our method when using an unmodified Moses-based statistical machine translation system as the bilingual resource.

  3. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provenly optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
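    The SF-ISF idea can be illustrated with a much simpler linear code than the turbo codes of the paper. In this hypothetical sketch (our own, using a (7,4) Hamming code), the syndrome former compresses 7 source bits to 3 syndrome bits, and the decoder recovers the source exactly from side information that differs from it in at most one position:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: columns are 1..7 in binary.
H = np.array([[int(b) for b in format(c, "03b")] for c in range(1, 8)]).T % 2

def syndrome(x):
    return tuple(H.dot(x) % 2)

# Inverse syndrome former: minimum-weight coset leaders. For the Hamming
# code, every nonzero syndrome corresponds to a unique single-bit pattern.
leaders = {syndrome(np.zeros(7, dtype=int)): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    leaders[syndrome(e)] = e

def sf_encode(x):
    """Syndrome former: 7 source bits -> 3 transmitted bits."""
    return syndrome(x)

def sw_decode(s, y):
    """Recover x from its syndrome s and side information y, assuming
    x and y differ in at most one position (the binning assumption)."""
    t = tuple((np.array(s) + H.dot(y)) % 2)  # syndrome of the difference x + y
    return (y + leaders[t]) % 2

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)
y = x.copy()
y[3] ^= 1                      # side information: one bit flipped
s = sf_encode(x)               # only 3 bits sent instead of 7
print(np.array_equal(sw_decode(s, y), x))  # True
```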

  4. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  5. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories

  6. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  7. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon Deliverable 3.1, where infrastructure and a domain analysis have been...

  8. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  9. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
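    The bit-partitioning idea can be made concrete with a toy numeric model. The distortion curve and channel-failure model below are our own assumptions for illustration, not the codec or channel models of the paper; the search simply splits a fixed bit budget between source and channel coding so that expected distortion is minimized:

```python
import math

def expected_distortion(r_s, r_c, snr_db=2.0, total_d=1.0):
    """Toy model: source distortion D(r) = 2^(-2r), and a channel whose
    residual failure probability decays with the number of protection bits.
    Both curves are assumptions, chosen only to expose the trade-off."""
    d_source = total_d * 2.0 ** (-2 * r_s)               # rate-distortion curve
    p_fail = math.exp(-0.3 * r_c * 10 ** (snr_db / 10))  # assumed decay model
    return p_fail * total_d + (1 - p_fail) * d_source

def allocate(total_bits):
    """Exhaustively split the budget between source and channel bits."""
    return min((expected_distortion(r_s, total_bits - r_s), r_s)
               for r_s in range(total_bits + 1))

d, r_s = allocate(16)
print(r_s, 16 - r_s)  # chosen source/channel split for the assumed models
```

    Spending every bit on the source (no protection) or on the channel (no fidelity) is ruled out automatically; the optimum sits in between, exactly the behavior the paper exploits with its scalable source coder.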

  10. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  11. Word translation entropy in translation

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Dragsted, Barbara; Hvelplund, Kristian Tangsgaard

    2016-01-01

    This study reports on an investigation into the relationship between the number of translation alternatives for a single word and eye movements on the source text. In addition, the effect of word order differences between source and target text on eye movements on the source text is studied. In particular, the current study investigates the effect of these variables on early and late eye movement measures. Early eye movement measures are indicative of processes that are more automatic, while late measures are more indicative of conscious processing. Most studies that found evidence of target language activation during source text reading in translation, i.e. co-activation of the two linguistic systems, employed late eye movement measures or reaction times. The current study therefore aims to investigate if and to what extent earlier eye movement measures in reading for translation show...
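    The word translation entropy measure itself is straightforward to compute: for one source word, the observed translations across many translators define choice probabilities, and the entropy is the standard H = −Σ p·log2 p. The translator data below is hypothetical, made up purely to illustrate the computation:

```python
import math
from collections import Counter

def word_translation_entropy(choices):
    """Entropy (bits) over the translations observed for one source word."""
    counts = Counter(choices)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical data: how 8 translators rendered the same source word.
unambiguous = ["house"] * 8                       # a single alternative
ambiguous = ["stated", "said", "claimed", "said",
             "argued", "stated", "said", "held"]  # many alternatives

h_one = word_translation_entropy(unambiguous)
h_many = word_translation_entropy(ambiguous)
print(h_one)   # 0.0
print(h_many)  # ≈ 2.16 bits
```

    A word that every translator renders identically has zero entropy; the more evenly the alternatives are spread, the higher the entropy, which is the quantity related to eye movement measures in the study.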

  12. Genetic Code Analysis Toolkit: A novel tool to explore the coding properties of the genetic code and DNA sequences

    Science.gov (United States)

    Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.

    2018-01-01

    The genetic code is degenerate and it is assumed that redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under current research. This paper presents a Genetic Code Analysis Toolkit (GCAT) which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and others. GCAT comes with a fertile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/
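    One of the properties GCAT tests, comma-freeness, can be checked in a few lines. The standalone sketch below is not GCAT's API, only an illustration of the definition: a codon set is comma-free if no codon of the set can be read out of frame in any concatenation of two codons from the set:

```python
from itertools import product

def is_comma_free(code):
    """Check that no codon of `code` occurs at a shifted (out-of-frame)
    position of any two-codon concatenation."""
    code = set(code)
    for a, b in product(code, repeat=2):
        pair = a + b                  # six letters, frames at offsets 1 and 2
        if pair[1:4] in code or pair[2:5] in code:
            return False
    return True

print(is_comma_free({"ACG", "TCG"}))  # shifted reads CGA, GAC, CGT, GTC are outside the set
print(is_comma_free({"AAA"}))         # "AAAAAA" reads "AAA" out of frame
```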

  13. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    Directory of Open Access Journals (Sweden)

    Matthew O'keefe

    1995-01-01

    Massively parallel processors (MPPs hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how applications codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  14. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  15. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    We consider the case of two correlated sources. The correlation between them has memory, and it is modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the first source. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  16. Improving Utility of GPU in Accelerating Industrial Applications with User-centred Automatic Code Translation

    DEFF Research Database (Denmark)

    Yang, Po; Dong, Feng; Codreanu, Valeriu

    2018-01-01

    ...design and hard to use. Little attention has been paid to the applicability, usability and learnability of these tools for normal users. In this paper, we present an online automated CPU-to-GPU source translation system (GPSME) for inexperienced users to utilize GPU capability in accelerating general SME applications. This system designs and implements a directive programming model with a new kernel generation scheme and memory management hierarchy to optimize its performance. A web service interface is designed for inexperienced users to easily and flexibly invoke the automatic resource translator...

  17. Translation Initiation from Conserved Non-AUG Codons Provides Additional Layers of Regulation and Coding Capacity

    Directory of Open Access Journals (Sweden)

    Ivaylo P. Ivanov

    2017-06-01

    Neurospora crassa cpc-1 and Saccharomyces cerevisiae GCN4 are homologs specifying transcription activators that drive the transcriptional response to amino acid limitation. The cpc-1 mRNA contains two upstream open reading frames (uORFs) in its >700-nucleotide (nt) 5′ leader, and its expression is controlled at the level of translation in response to amino acid starvation. We used N. crassa cell extracts and obtained data indicating that cpc-1 uORF1 and uORF2 are functionally analogous to GCN4 uORF1 and uORF4, respectively, in controlling translation. We also found that the 5′ region upstream of the main coding sequence of the cpc-1 mRNA extends for more than 700 nucleotides without any in-frame stop codon. For 100 cpc-1 homologs from Pezizomycotina and from selected Basidiomycota, 5′ conserved extensions of the CPC1 reading frame are also observed. Multiple non-AUG near-cognate codons (NCCs) in the CPC1 reading frame upstream of uORF2, some deeply conserved, could potentially initiate translation. At least four NCCs initiated translation in vitro. In vivo data were consistent with initiation at NCCs to produce N-terminally extended N. crassa CPC1 isoforms. The pivotal role played by CPC1, combined with its translational regulation by uORFs and NCC utilization, underscores the emerging significance of noncanonical initiation events in controlling gene expression.

  18. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  19. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model where only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum as well as the neutron spectrum of all the neutrons emitted from the target show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than 30× when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  20. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing a per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
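    The redundancy that makes such frame expansions resilient can be illustrated with a plain real frame and the pseudoinverse receiver mentioned above. This is a simplified stand-in for the paper's OFB/VLC chain; the frame construction and dimensions are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 4, 8                       # signal dimension, frame size (2x oversampled)

# Real harmonic-style frame: rows of an oversampled DCT, full column rank.
F = np.cos(np.pi * (np.arange(n)[:, None] + 0.5) * np.arange(k) / n)

x = rng.standard_normal(k)        # message signal
y = F @ x                         # redundant frame expansion (the "code word")

erased = [1, 6]                   # channel erases two of the eight samples
kept = [i for i in range(n) if i not in erased]

# Pseudoinverse receiver: reconstruct from the surviving samples only.
x_hat = np.linalg.pinv(F[kept]) @ y[kept]
print(np.allclose(x, x_hat))      # True: 2x redundancy absorbs the erasures
```

    With quantization and unknown error positions, the extra machinery of the paper (syndrome-based localization and reliability-aided hypothesis testing) is what replaces the oracle knowledge of `erased` assumed here.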

  1. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  2. Literal Translation using Google Translate in Translating the Text from French to English in Digital Tourism Brochure “Bienvenue À Paris”

    Directory of Open Access Journals (Sweden)

    Rila Hilma

    2011-05-01

    Full Text Available Translation is basically a change of form. The form from which the translation is made is called the source language, and the form into which it is to be changed is called the receptor language. Translation consists of transferring the meaning of the source language into the receptor language. Translating is not an easy job, since many things have to be considered: translation means determining the meaning of a text, then reconstructing this same meaning using the appropriate structure and form in the receptor language. Translation is basically divided into two types: literal and idiomatic. Literal translation adheres strictly to the structure and form of the source language and therefore often cannot fully express its true meaning. Idiomatic translation makes every effort to communicate the meaning of the source language text in the natural forms of the receptor language. The most popular translation machine, Google Translate, in this study shows translation results that remain odd, unnatural, and nonsensical because of unsuccessful message delivery, which is notably the typical error of literal translation.

  3. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and much work has progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation effect of γ-rays in the source, media and detector; this concept is based on a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over the years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source into one for a volume source. To test the performance of the ESA code, we measured γ-ray point standard sources and voluminous certified reference material (CRM) sources, and compared the results with the efficiency curves obtained in this study. The 200∼1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check for the effect of linear attenuation only. We will use interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
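    The efficiency-transfer idea behind the ESA code can be illustrated with a one-dimensional effective-solid-angle integral: each emission direction is weighted by the gamma transmission along its path, and a measured point-source efficiency is scaled by a ratio of effective solid angles. The slab-absorber model, geometry and efficiency value below are assumptions for illustration only:

```python
import math

# Moens-style effective solid angle: each direction toward the detector
# face is weighted by the gamma transmission along its path. The absorber
# is modeled crudely as a flat slab of optical thickness mu_t crossed at
# slant incidence; geometry and mu_t values are illustrative assumptions.

def omega_eff(d, R, mu_t=0.0, n=10000):
    """Effective solid angle of a coaxial disk of radius R at distance d,
    seen from an on-axis point source (midpoint-rule integration)."""
    theta_max = math.atan2(R, d)
    h = theta_max / n
    acc = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        acc += math.exp(-mu_t / math.cos(th)) * math.sin(th) * h
    return 2.0 * math.pi * acc

# With mu_t = 0 this reduces to the purely geometric solid angle.
geometric = 2 * math.pi * (1 - 3.0 / math.hypot(3.0, 2.5))
assert abs(omega_eff(3.0, 2.5) - geometric) < 1e-6

# Efficiency transfer in the ESA spirit: scale a measured point-source FE
# efficiency (value assumed) by the ratio of effective solid angles.
eff_point = 0.012
eff_attenuated = eff_point * omega_eff(3.0, 2.5, mu_t=0.15) / omega_eff(3.0, 2.5)
print(f"{eff_attenuated:.5f}")
```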

  4. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  5. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  6. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
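    The model-based least-squares reconstruction can be sketched in one dimension: the detector records the object convolved with the coded mask's open-hole pattern, here weighted by an assumed per-hole source flux profile standing in for the source model. Mask, flux profile and object below are illustrative, not the ORNL CG1D configuration:

```python
import numpy as np

# 1-D toy of the coded-source forward model: detector data = object
# convolved with the mask's open-hole pattern, weighted by a per-hole
# source flux profile (the "source model"). Reconstruction solves the
# linear system in the least-squares sense. All values are assumptions.

mask = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0])   # 1 = open hole
flux = np.array([0.8, 1.0, 1.2, 1.0, 0.9, 1.1, 1.0])   # assumed flux profile
kernel = mask * flux

obj = np.zeros(40)
obj[[8, 20, 21]] = [2.0, 1.0, 3.0]                      # point-like features

# Build the forward operator explicitly (tall banded convolution matrix),
# so the source model is folded directly into the system matrix.
n, k = obj.size, kernel.size
A = np.zeros((n + k - 1, n))
for i in range(n):
    A[i:i + k, i] = kernel

meas = A @ obj                                          # noiseless detector data
recon, *_ = np.linalg.lstsq(A, meas, rcond=None)

assert np.allclose(recon, obj, atol=1e-8)
```

Because the first kernel tap is nonzero, A has full column rank and the noiseless least-squares solution is exact; real detector data would add noise and call for regularization.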

  7. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
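    A common baseline for defeating the variable-renaming cheat described above is to normalize identifiers before comparing token n-grams; this is only a sketch of the idea, not the actual algorithm of Sherlock, JPlag or Moss:

```python
import re

# Identifier-normalized n-gram similarity: renaming variables leaves the
# normalized token stream unchanged, so copies with renamed identifiers
# still score 1.0. The keyword set is a small illustrative subset.

KEYWORDS = {"def", "return", "for", "in", "if", "else", "while", "range"}

def normalize(code: str):
    """Tokenize and replace every non-keyword identifier with ID."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)
    return ["ID" if re.match(r"[A-Za-z_]", t) and t not in KEYWORDS else t
            for t in tokens]

def ngrams(tokens, n=3):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(a: str, b: str, n=3) -> float:
    """Jaccard similarity of normalized token trigram sets."""
    ga, gb = ngrams(normalize(a), n), ngrams(normalize(b), n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s\n"
renamed  = "def add_up(vals):\n    acc = 0\n    for v in vals:\n        acc = acc + v\n    return acc\n"

print(similarity(original, renamed))  # 1.0: renaming alone is invisible
```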

  8. THE BIOGRAPHIES OF ROMAN EMPERORS BY KOŽIČIĆ. FAITHFULNESS TO THE SOURCE AND ORIGINALITY IN TRANSLATING

    Directory of Open Access Journals (Sweden)

    Tomislav Mrkonjić

    2012-01-01

    Full Text Available The question relating to the dependence on the source, namely, the matter concerning the originality of the translation of Kožičić’s Biographies (Knjižice od žitija rimskih arhijerejov i cesarov, Rijeka, 1531), was resolved only partially by Günther Tutschke, because he did not know the source of the biographies of the Roman emperors. Since the author of this article has ascertained that the biographical source on the emperors was the work of Giovanni Battista Cipelli (Egnatius, De Caesaribus libri tres), in this article he wishes to present the context in which the translations were produced, compare the structure of the source with the translation, juxtapose the texts known as “Excursus” (De Parthorum et Persarum imperio, Romae captivitas, Maomethis ortus, De origine Turcarum), as well as compare individual biographies. The conclusion is that Kožičić demonstrated a certain level of originality in his translations regarding the structure, the selection and elaboration of the “Excursus”, and the conversion of particular terminology. In many instances he integrated the biographies with information regarding local history, especially concerning Croats and southern Slavs.

  9. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  10. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  11. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  12. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
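    The coset-code idea can be illustrated with the classic Slepian-Wolf toy: transmit only the syndrome of one band and let the decoder correct its correlated side information. The (7,4) Hamming code below illustrates syndrome (coset) coding in general, not the specific codes used in the paper:

```python
# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so a nonzero syndrome directly names a flipped bit.
H = [[(j + 1) >> k & 1 for j in range(7)] for k in range(3)]

def syndrome(bits):
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def dsc_decode(side_info, synd):
    """Recover X from its 3-bit syndrome using correlated side information Y,
    assuming X and Y differ in at most one position per 7-bit block."""
    t = tuple((a + b) % 2 for a, b in zip(syndrome(side_info), synd))
    if t == (0, 0, 0):
        return list(side_info)
    pos = t[0] + 2 * t[1] + 4 * t[2] - 1   # bit position named by t
    out = list(side_info)
    out[pos] ^= 1
    return out

x = [1, 0, 1, 1, 0, 0, 1]      # band to compress (7 bits)
y = [1, 0, 1, 0, 0, 0, 1]      # correlated band at the decoder (1 bit differs)

s = syndrome(x)                 # only these 3 bits are transmitted
assert dsc_decode(y, s) == x
print("X recovered from a 3-bit syndrome plus side information")
```

The encoder thus sends 3 bits instead of 7, and never sees the side-information band: the complexity shift from encoder to decoder described in the abstract.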

  13. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers to understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs....... Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward- and reverse restructurings...... are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...

  14. The Mystro system: A comprehensive translator toolkit

    Science.gov (United States)

    Collins, W. R.; Noonan, R. E.

    1985-01-01

    Mystro is a system that facilitates the construction of compilers, assemblers, code generators, query interpreters, and similar programs. It provides features to encourage the use of iterative enhancement. Mystro was developed in response to the needs of NASA Langley Research Center (LaRC) and enjoys a number of advantages over similar systems. There are other programs available that can be used in building translators. These typically build parser tables, usually supply the source of a parser and parts of a lexical analyzer, but provide little or no aid for code generation. In general, only the front end of the compiler is addressed. Mystro, on the other hand, emphasizes tools for both ends of a compiler.

  15. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  16. Translation modalities: an investigation of the translated short story “Dez de dezembro”

    Directory of Open Access Journals (Sweden)

    Clara Peron da Silva Guedes

    2017-05-01

    Full Text Available During the translation process translators adopt linguistic strategies in order to make decisions that help to render a translated text suitable to the target language and culture. The translation modalities proposed by Aubert (105-10) constitute a tool that enables one to identify some of these strategies. In addition, they permit one to measure the level of linguistic differentiation between a source text and a target text, verifying the distance or the proximity of the target text to the linguistic and cultural issues of the source text. Thus, this paper aims to investigate the translation modalities in the short story “Dez de dezembro” (Saunders 204-38), a translation of the short story “Tenth of December” (Saunders 215-51). For quantifying the translation modalities in the translated text, the noun phrases from the source text were selected and their counterparts in the target text were classified and annotated within the Notepad++ software. The most recurrent translation modalities in the corpus were Literal Translation and Transposition, categories considered intermediate ones in the rank proposed by Aubert (105-10). Therefore, a relation of equivalence can be established between the target and the source texts.

  17. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  18. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  19. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs enhance the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to the link quality of that hop. In this paper, we propose three source authentication schemes for code dissemination that support dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption.
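    A classic hash-chain baseline shows why the per-packet authentication tokens mentioned above break when an intermediate hop re-packetizes the image: every hash commits to an exact packet boundary, so changing the packet size invalidates the signed anchor hash. This sketch illustrates that baseline problem, not the three schemes proposed in the paper:

```python
import hashlib

def make_chain(image: bytes, size: int):
    """Split a code image into `size`-byte packets and append to each packet
    the SHA-256 hash of its successor (classic hash-chain dissemination).
    Only the anchor hash h0 must be authenticated out of band (e.g. signed
    by the base station)."""
    chunks = [image[i:i + size] for i in range(0, len(image), size)]
    packets, nxt = [], b""
    for c in reversed(chunks):
        pkt = c + nxt
        nxt = hashlib.sha256(pkt).digest()
        packets.append(pkt)
    return packets[::-1], nxt          # nxt is now h0, the anchor hash

def verify_stream(packets, h0):
    expect = h0
    for pkt in packets:
        if hashlib.sha256(pkt).digest() != expect:
            return False
        expect = pkt[-32:]             # hash of the next packet rides in this one
    return True

image = bytes(range(256)) * 4          # stand-in for a 1 KiB code image
pkts, h0 = make_chain(image, 100)      # base station picks a 100-byte packet size
assert verify_stream(pkts, h0)

# If a relaying hop re-packetizes to, say, 60 bytes, every hash boundary
# moves and h0 no longer verifies; this is the problem the paper addresses.
repacked, _ = make_chain(image, 60)
print(verify_stream(repacked, h0))     # prints False
```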

  20. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  1. Towards a Finer-Grained Classification of Translation Styles Based on Eye-Tracking, Key-Logging and RTP Data

    DEFF Research Database (Denmark)

    Feng, Jia; Carl, Michael

    This research endeavors to reach a finer-grained classification of translation styles based on observations of Translation Progression Graphs that integrate translation process data and translation product data. Translation styles are first coded based on the findings and classification of Jakobsen...... for the translation tasks. Each translation task is immediately followed by a retrospective protocol with the eye-tracking replay as the cue. We are also interested to see whether translation directionality and source text difficulty would have an impact on translation styles. We try to explore 1) the translation...... styles in terms of different ways of allocating attention to the three phases of translation process, 2) the translation styles in the orientation phase, 3) the translation styles in the drafting phase, with a special focus on online-planning, backtracking, online-revision, as well as the distribution...

  2. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  3. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  4. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    Fragments of the RPM spec file excerpted in this record (truncated at both ends):

    …apps/usr --with-libpq=/apps/postgres
    make
    rm -rf $RPM_BUILD_ROOT
    umask 0077
    mkdir -p $RPM_BUILD_ROOT/usr/local/bin
    mkdir -p $RPM_BUILD_ROOT…

    …from a source code repository.

    %pre
    %prep
    %setup
    %build
    ./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres
    make
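    The spec fragments above can be generated mechanically from repository metadata, which is the point of automating RPM creation. A minimal sketch; the package name, version and configure flags are hypothetical, and a real pipeline would pull them from the repository (e.g. from tags) and run rpmbuild on the result:

```python
# Generate a minimal RPM .spec file from repository metadata. The %install
# conventions (rm -rf $RPM_BUILD_ROOT, umask, mkdir) mirror the fragments
# excerpted in the record above; all concrete values are illustrative.

SPEC_TEMPLATE = """\
Name: {name}
Version: {version}
Release: 1
Summary: {summary}
License: Unknown

%description
{summary}

%prep
%setup

%build
./autogen.sh ; ./configure {configure_flags}
make

%install
rm -rf $RPM_BUILD_ROOT
umask 0077
mkdir -p $RPM_BUILD_ROOT/usr/local/bin
"""

def make_spec(name, version, summary, configure_flags=""):
    return SPEC_TEMPLATE.format(name=name, version=version,
                                summary=summary,
                                configure_flags=configure_flags)

spec = make_spec("demo-tool", "1.0", "Demo package",
                 "--with-db=/apps/db --with-libpq=/apps/postgres")
print(spec.splitlines()[0])  # prints: Name: demo-tool
```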

  5. Translation Techniques

    OpenAIRE

    Marcia Pinheiro

    2015-01-01

    In this paper, we discuss three translation techniques: literal, cultural, and artistic. Literal translation is a well-known technique, which means that it is quite easy to find sources on the topic. Cultural and artistic translation may be new terms. Whilst cultural translation focuses on matching contexts, artistic translation focuses on matching reactions. Because literal translation matches only words, it is not hard to find situations in which we should not use this technique.  Because a...

  6. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  7. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  8. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package in the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented in the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used; it consists of a small-LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt

  9. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
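    The evaluation criterion described in this record, the minimum number of words needed for the desired code to win the full-text search, can be sketched with a toy dictionary and a plain word-overlap score; neither is any particular engine's relevance formula:

```python
from itertools import combinations

# Toy stand-in for the ICD-10 dictionary (real descriptions are longer);
# the score is a simple word-overlap ratio, for illustration only.
CODES = {
    "J45": "asthma",
    "J44": "other chronic obstructive pulmonary disease",
    "I10": "essential primary hypertension",
    "E10": "type 1 diabetes mellitus",
    "E11": "type 2 diabetes mellitus",
}

def score(query_words, text):
    words = set(text.split())
    return len(set(query_words) & words) / len(words)

def best_match(query_words):
    """Top-scoring code, or None when the best score is not unique."""
    ranked = sorted(CODES, key=lambda c: score(query_words, CODES[c]),
                    reverse=True)
    if len(ranked) > 1 and (score(query_words, CODES[ranked[0]])
                            == score(query_words, CODES[ranked[1]])):
        return None
    return ranked[0]

def min_words_for_match(code):
    """Smallest number of the code's own words that already rank it first:
    the evaluation criterion used in the study."""
    words = CODES[code].split()
    for k in range(1, len(words) + 1):
        if any(best_match(c) == code for c in combinations(words, k)):
            return k
    return None

print(min_words_for_match("E11"))  # → 1 ("2" alone is discriminative here)
```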

  10. Translation techniques for distributed-shared memory programming models

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, Douglas James [Iowa State Univ., Ames, IA (United States)

    2005-01-01

The high performance computing community has experienced an explosive improvement in distributed-shared memory hardware. Driven by increasing real-world problem complexity, this explosion has ushered in vast numbers of new systems. Each new system presents new challenges to programmers and application developers. Part of the challenge is adapting to new architectures with new performance characteristics. Different vendors release systems with widely varying architectures that perform differently in different situations. Furthermore, since vendors need only provide a single performance number (total MFLOPS, typically for a single benchmark), they initially have a strong incentive to optimize only the API of their choice. Consequently, only a fraction of the available APIs are well optimized on most systems. This causes issues in porting and in writing maintainable software, and burdens programmers with mastering each new API as it is released. Also, programmers wishing to use a certain machine must choose their API based on the underlying hardware instead of the application. This thesis argues that a flexible, extensible translator for distributed-shared memory APIs can help address some of these issues. For example, a translator might take as input code in one API and output an equivalent program in another. Such a translator could provide instant porting of applications to new systems that do not support the application's library or language natively. While open-source APIs are abundant, they do not perform optimally everywhere. A translator would also allow performance testing using a single base code translated to a number of different APIs. Most significantly, this type of translator frees programmers to select the most appropriate API for a given application based on the application (and developer) itself instead of the underlying hardware.

  11. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

Full Text Available Although there are various source code plagiarism detection approaches, only a few focus on low-level representation for deducing similarity; most rely only on lexical token sequences extracted from the source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus on .NET programming languages, with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
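A rough sketch of local alignment over low-level instruction streams may clarify the idea; the CIL-like opcode sequences below are invented examples, and plain Smith-Waterman scoring is used as a simplified stand-in for the paper's Adaptive Local Alignment (whose adaptive weighting is omitted):

```python
def local_align(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score over two token sequences,
    here standing in for Common Intermediate Language opcode streams."""
    m, n = len(a), len(b)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# A "plagiarized" stream: renaming local variables changes slot numbers
# but leaves most opcodes intact, so the alignment score stays high.
orig = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
copy = ["ldarg.0", "ldarg.1", "add", "stloc.1", "ldloc.1", "ret"]
unrelated = ["ldstr", "call", "pop", "ret"]
```

The point of the low-level representation is visible here: identifier renaming, a common plagiarism attack, barely perturbs the opcode stream.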

  12. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless at the next hop, where the packet size can vary according to that hop's link quality. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in an environment where the packet size changes at each hop, with smaller energy consumption. PMID:27409616

  13. Translation: between what can be translated and what must be translated

    Directory of Open Access Journals (Sweden)

    Magda Jeanrenaud

    2016-02-01

Full Text Available Starting from a disconcerting interpretation of Jacques Derrida, our analysis aims at investigating and also tries to explain the blockage which appears in the English, French and Romanian translations (signed by Maurice de Gandillac, Antoine Berman, Laurent Lamy, Alexis Nouss, Harry Zohn, Steven Rendall, Martine Broda, Catrinel Pleșu etc.) of a well-known text of Walter Benjamin, Die Aufgabe des Übersetzers, when translators transpose into their target languages the two quotations given by Benjamin: one from Mallarmé, left untranslated in the source text, and another, signed by Pannwitz. The fact is that both quotations have something in common: a discursive form which results from an unusual syntax, as if they were already, in a certain sense, „translations". As if the translators feared—a feature of the translator's psychology?—not to render their text sufficiently accessible, even when the source text is not intended to be accessible. Hence the painful dilemma of the intentional fallacy (not only of the text to be translated).

  14. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered: one minimizes the average video distortion of the nodes, while the other minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  15. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f₁(z) for charged particles in unit-density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from ²³⁹Pu in alveolar lung tissue and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs

  16. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft. The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
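A minimal static scan of the kind the study describes might look like the sketch below; the regular-expression approach, the list of flagged calls, and the sample C snippet are illustrative assumptions, not the study's actual tooling:

```python
import re

# Well-known unsafe C library calls flagged by the study (subset).
UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "strcmp", "strlen", "memcpy")

def count_unsafe(source):
    """Count call sites of known unsafe functions in a C source string."""
    counts = {}
    for fn in UNSAFE:
        # match the bare identifier followed by '(', so bounded variants
        # like strncpy or strcpy_s are not counted
        hits = re.findall(r"\b" + fn + r"\s*\(", source)
        if hits:
            counts[fn] = len(hits)
    return counts

code = """
    char dst[16];
    strcpy(dst, src);                   /* unsafe: no bounds check   */
    memcpy(dst, src, n);                /* unsafe if n is unchecked  */
    strncpy(dst, src, sizeof dst - 1);  /* bounded alternative       */
"""
```

Real scanners work on parsed tokens rather than raw text, but even this crude pattern shows how a per-file distribution of unsafe calls can be tallied across a system.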

  17. Multilingual Practices in Contemporary and Historical Contexts: Interfaces between Code-Switching and Translation

    Science.gov (United States)

    Kolehmainen, Leena; Skaffari, Janne

    2016-01-01

    This article serves as an introduction to a collection of four articles on multilingual practices in speech and writing, exploring both contemporary and historical sources. It not only introduces the articles but also discusses the scope and definitions of code-switching, attitudes towards multilingual interaction and, most pertinently, the…

  18. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  19. Bean Soup Translation: Flexible, Linguistically-Motivated Syntax for Machine Translation

    Science.gov (United States)

    Mehay, Dennis Nolan

    2012-01-01

    Machine translation (MT) systems attempt to translate texts from one language into another by translating words from a "source language" and rearranging them into fluent utterances in a "target language." When the two languages organize concepts in very different ways, knowledge of their general sentence structure, or…

  20. SmartR: an open-source platform for interactive visual analytics for translational research data.

    Science.gov (United States)

    Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard

    2017-07-15

In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types such as clinical, pre-clinical or OMICS data combined with strong visual analytical capabilities will significantly accelerate the scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR, a plugin for tranSMART that equips the platform not only with several dynamic visual analytical workflows, but also provides its own framework for the addition of new custom workflows. Modern web technologies such as D3.js or AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR. Contact: reinhard.schneider@uni.lu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  1. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  2. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction

  3. Characterization of in vitro translation products

    International Nuclear Information System (INIS)

    Jagus, R.

    1987-01-01

This chapter describes the characterization of in vitro translation products by the most commonly used techniques. The methods include SDS-polyacrylamide gel electrophoresis (SDS-PAGE), combined with immunoprecipitation and/or fluorography of [³⁵S]methionine-labeled translation products. The other frequently used characterization tool, translation of hybrid-selected mRNA or hybrid-arrested translation, is treated separately in this volume. Methods are also given for the recognition of mRNAs coding for secreted or membrane proteins

  4. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO₂, ThO₂, MOX, etc.), enrichment plant operations (UF₆, PuF₄, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
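The spontaneous-fission spectra mentioned above use the Watt form, χ(E) ∝ exp(−E/a)·sinh(√(bE)). A small numerical sketch follows; the parameter values are representative numbers for ²⁵²Cf spontaneous fission taken as an assumption here, not SOURCES-3A's evaluated library:

```python
import math

def watt_pdf(E, a=1.025, b=2.926):
    """Unnormalized Watt fission spectrum exp(-E/a) * sinh(sqrt(b*E)),
    with E in MeV. Parameters are assumed representative 252Cf values."""
    return math.exp(-E / a) * math.sinh(math.sqrt(b * E))

def mean_energy(a=1.025, b=2.926, emax=20.0, n=20000):
    """Mean neutron energy (MeV) by simple Riemann-sum integration."""
    dE = emax / n
    num = sum(i * dE * watt_pdf(i * dE, a, b) * dE for i in range(1, n))
    den = sum(watt_pdf(i * dE, a, b) * dE for i in range(1, n))
    return num / den
```

The numerical mean should agree with the closed form 3a/2 + a²b/4 (about 2.3 MeV for these parameters), which is a quick sanity check on any tabulated spectrum.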

  5. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
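The random-projection stage of such a hash can be sketched as below; this toy version quantizes projections of a feature vector to sign bits and omits the perceptual time-frequency front end and the LDPC syndrome coding entirely, so every name and parameter here is an illustrative assumption:

```python
import random

def hash_projections(signal, n_proj=8, seed=42):
    """Toy content hash: fixed pseudo-random projections of a feature
    vector, quantized to sign bits. A fixed seed makes the projection
    matrix shared between content provider and content user."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n_proj):
        proj = sum(rng.uniform(-1, 1) * x for x in signal)
        bits.append(1 if proj >= 0 else 0)
    return bits
```

Because the seed (and hence the projection matrix) is shared, the same content always yields the same bits, while a sparse tampering perturbs only a few projections; the real scheme then transmits LDPC syndromes of the projections instead of the bits themselves.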

  6. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions beg the question: why do companies choose a strategy based on giving away software assets? Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Conversely, in this paper we investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  7. Translation and Culture:Translation as a Cross-cultural Mediation

    Institute of Scientific and Technical Information of China (English)

    叶谋锦

    2013-01-01

Translation is a complex activity which involves language competence as well as proficiency in multiculturalism. From the perspective of multiculturalism, translation resembles a recreation of the source text: grasping its essential meanings to produce a subtle target text which can be clearly perceived by target readers. Ignoring cultural issues can produce serious mistranslations in the field of advertising translation. This paper aims to explore the significance of connotation as confined by the framework of culture and points out, by illustrating three business examples, that literal translation is a dangerous inclination. This paper argues that cross-cultural mediation plays an important role in translation.

  8. Forked and Integrated Variants In An Open-Source Firmware Project

    DEFF Research Database (Denmark)

    Stanciulescu, Stefan; Schulze, Sandro; Wasowski, Andrzej

    2015-01-01

Code cloning has been reported both on a small (code fragments) and a large (entire projects) scale. Cloning-in-the-large, or forking, is gaining ground as a reuse mechanism thanks to the availability of better tools for maintaining forked project variants, hereunder distributed version control systems and interactive source management platforms such as GitHub. We study the advantages and disadvantages of forking using the case of Marlin, an open source firmware for 3D printers. We find that many problems and advantages of cloning do translate to forking. Interestingly, the Marlin community uses both forking...

  9. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails and posts, but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or detecting software plagiarism. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.
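Particle swarm optimization, which the method above uses to initialize the network weights, can be illustrated on a toy objective; the inertia and acceleration parameters below are conventional defaults assumed for illustration, not the paper's settings, and a simple sphere function stands in for the network's training loss:

```python
import random

def pso(f, dim, n=20, iters=60, seed=1):
    """Minimal particle swarm optimizer minimizing f over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(t * t for t in x), dim=3)
```

In the hybrid scheme, `f` would instead evaluate the BP network's training error for a candidate weight vector, with BP refining the swarm's best solution.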

  10. Translating Proper Nouns: A Case Study on English Translation of Hafez's Lyrics

    Science.gov (United States)

    Shirinzadeh, Seyed Alireza; Mahadi, Tengku Sepora Tengku

    2014-01-01

    Proper nouns are regarded so simple that they might be taken for granted in translation explorations. Some may believe that they should not be translated in transmitting source texts to target texts. But, it is not the case; if one looks at present translations, he will notice that different strategies might be applied for translating proper…

  11. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....

  12. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

Personnel preparing codes of practice often require basic Health Physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Owing to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas and stimulate readers to study radiological protection issues in greater depth. (author)

  13. Cultural Context and Translation

    Institute of Scientific and Technical Information of China (English)

    张敏

    2009-01-01

Cultural context plays an important role in translation. Because translation is a cross-cultural activity, the cultural context that influences translating consists of the cultural contexts of both the source language and the target language. This article first analyzes the concepts of context and cultural context, then, following the procedure of translating, classifies cultural context into two stages and discusses how each of them influences translating.

  14. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

We report on the performance of channel and source coding applied to an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km of fiber transmission. We report on peak signal-to-noise ratio perfor...

  15. DNA Translator and Aligner: HyperCard utilities to aid phylogenetic analysis of molecules.

    Science.gov (United States)

    Eernisse, D J

    1992-04-01

DNA Translator and Aligner are molecular phylogenetics HyperCard stacks for Macintosh computers. They manipulate sequence data to provide graphical gene mapping, conversions, translations and manual multiple-sequence alignment editing. DNA Translator is able to convert GenBank- or EMBL-documented sequences into linearized, rescalable gene maps whose gene sequences are extractable by clicking on the corresponding map button or by selection from a scrolling list. The provided gene maps, complete with extractable sequences, consist of nine metazoan, one yeast, and one ciliate mitochondrial DNAs and three green plant chloroplast DNAs. Single or multiple sequences can be manipulated to aid in phylogenetic analysis. Sequences can be translated between nucleic acids and proteins in either direction, with flexible support of alternate genetic codes and ambiguous nucleotide symbols. Multiple aligned sequence output from diverse sources can be converted to Nexus, Hennig86 or PHYLIP format for subsequent phylogenetic analysis. Input or output alignments can be examined with Aligner, a convenient accessory stack included in the DNA Translator package. Aligner is an editor for the manual alignment of up to 100 sequences that toggles between display of matched characters and normal unmatched sequences. DNA Translator also generates graphic displays of amino acid coding and codon usage frequency relative to all other, or only synonymous, codons for approximately 70 select organism-organelle combinations. Codon usage data are compatible with spreadsheet or UWGCG formats for incorporation of additional molecules of interest. The complete package is available via anonymous ftp and is free for non-commercial uses.
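The nucleic-acid-to-protein translation step such a tool performs can be sketched with a codon lookup; the table below is a small subset of the standard genetic code, kept short for illustration, and the 'X' placeholder for unknown codons is an assumption of this sketch:

```python
# Subset of the standard genetic code (codon -> one-letter amino acid,
# '*' marks stop codons). A full table has all 64 codons.
CODON_TABLE = {
    "ATG": "M", "TGG": "W", "TTT": "F", "TTC": "F",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
    "AAA": "K", "AAG": "K", "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate(dna, table=CODON_TABLE):
    """Translate a DNA coding sequence to protein, reading codon by
    codon from position 0 and stopping at the first stop codon.
    Codons missing from the (here truncated) table map to 'X'."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = table.get(dna[i:i + 3].upper(), "X")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)
```

Supporting the alternate genetic codes mentioned in the abstract amounts to swapping in a different codon table, e.g. one where TGA codes for tryptophan as in many mitochondria.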

  16. Eye-movements During Translation

    DEFF Research Database (Denmark)

    Balling, Laura Winther

    2013-01-01

    texts as well as both eye-tracking and keylogging data. Based on this database, I present a large-scale analysis of gaze on the source text based on 91 translators' translations of six different texts from English into four different target languages. I use mixed-effects modelling to compare from......, and variables indexing the alignment between the source and target texts. The results are related to current models of translation processes and reading and compared to a parallel analysis of production time....

  17. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  18. Using Open Source Tools to Create a Mobile Optimized, Crowdsourced Translation Tool

    Directory of Open Access Journals (Sweden)

    Evviva Weinraub Lajoie

    2014-04-01

    Full Text Available In late 2012, OSU Libraries and Press partnered with Maria's Libraries, an NGO in Rural Kenya, to provide users the ability to crowdsource translations of folk tales and existing children's books into a variety of African languages, sub-languages, and dialects. Together, these two organizations have been creating a mobile optimized platform using open source libraries such as Wink Toolkit (a library which provides mobile-friendly interaction from a website) and Globalize3 to allow for multiple translations of database entries in a Ruby on Rails application. Research regarding successes of similar tools has been utilized in providing a consistent user interface. The OSU Libraries & Press team delivered a proof-of-concept tool that has the opportunity to promote technology exploration, improve early childhood literacy, change the way we approach foreign language learning, and to provide opportunities for cost-effective, multi-language publishing.

  19. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation is described, based on VEditor, a free-license program; the work presented in this paper thus supplements and extends this free license. The introduction briefly characterizes tools available on the market that aid the design of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are presented, such as the programming extension concept and the results of the activities of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  20. Overview of the ArbiTER edge plasma eigenvalue code

    Science.gov (United States)

    Baver, Derek; Myra, James; Umansky, Maxim

    2011-10-01

    The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.

  1. Using Open Source Tools to Create a Mobile Optimized, Crowdsourced Translation Tool

    OpenAIRE

    Evviva Weinraub Lajoie; Trey Terrell; Susan McEvoy; Eva Kaplan; Ariel Schwartz; Esther Ajambo

    2014-01-01

    In late 2012, OSU Libraries and Press partnered with Maria's Libraries, an NGO in Rural Kenya, to provide users the ability to crowdsource translations of folk tales and existing children's books into a variety of African languages, sub-languages, and dialects. Together, these two organizations have been creating a mobile optimized platform using open source libraries such as Wink Toolkit (a library which provides mobile-friendly interaction from a website) and Globalize3 to allow for multipl...

  2. A toolbox of lectins for translating the sugar code: the galectin network in phylogenesis and tumors.

    Science.gov (United States)

    Kaltner, H; Gabius, H-J

    2012-04-01

    Lectin histochemistry has revealed cell-type-selective glycosylation. It is under dynamic and spatially controlled regulation. Since their chemical properties allow carbohydrates to reach unsurpassed structural diversity in oligomers, they are ideal for high density information coding. Consequently, the concept of the sugar code assigns a functional dimension to the glycans of cellular glycoconjugates. Indeed, multifarious cell processes depend on specific recognition of glycans by their receptors (lectins), which translate the sugar-encoded information into effects. Duplication of ancestral genes and the following divergence of sequences account for the evolutionary dynamics in lectin families. Differences in gene number can even appear among closely related species. The adhesion/growth-regulatory galectins are selected as an instructive example to trace the phylogenetic diversification in several animals, most of them popular models in developmental and tumor biology. Chicken galectins are identified as a low-level-complexity set, thus singled out for further detailed analysis. The various operative means for establishing protein diversity among the chicken galectins are delineated, and individual characteristics in expression profiles discerned. To apply this galectin-fingerprinting approach in histopathology has potential for refining differential diagnosis and for obtaining prognostic assessments. On the grounds of in vitro work with tumor cells a strategically orchestrated co-regulation of galectin expression with presentation of cognate glycans is detected. This coordination epitomizes the far-reaching physiological significance of sugar coding.

  3. Trans-acting translational regulatory RNA binding proteins.

    Science.gov (United States)

    Harvey, Robert F; Smith, Tom S; Mulroney, Thomas; Queiroz, Rayner M L; Pizzinga, Mariavittoria; Dezi, Veronica; Villenueva, Eneko; Ramakrishna, Manasa; Lilley, Kathryn S; Willis, Anne E

    2018-05-01

    The canonical molecular machinery required for global mRNA translation and its control has been well defined, with distinct sets of proteins involved in the processes of translation initiation, elongation and termination. Additionally, noncanonical, trans-acting regulatory RNA-binding proteins (RBPs) are necessary to provide mRNA-specific translation, and these interact with 5' and 3' untranslated regions and coding regions of mRNA to regulate ribosome recruitment and transit. Recently it has also been demonstrated that trans-acting ribosomal proteins direct the translation of specific mRNAs. Importantly, it has been shown that subsets of RBPs often work in concert, forming distinct regulatory complexes upon different cellular perturbation, creating an RBP combinatorial code, which through the translation of specific subsets of mRNAs, dictate cell fate. With the development of new methodologies, a plethora of novel RNA binding proteins have recently been identified, although the function of many of these proteins within mRNA translation is unknown. In this review we will discuss these methodologies and their shortcomings when applied to the study of translation, which need to be addressed to enable a better understanding of trans-acting translational regulatory proteins. Moreover, we discuss the protein domains that are responsible for RNA binding as well as the RNA motifs to which they bind, and the role of trans-acting ribosomal proteins in directing the translation of specific mRNAs. This article is categorized under: RNA Interactions with Proteins and Other Molecules > RNA-Protein Complexes; Translation > Translation Regulation; Translation > Translation Mechanisms. © 2018 Medical Research Council and University of Cambridge. WIREs RNA published by Wiley Periodicals, Inc.

  4. Advertisement Translation under Skopos Theory

    Institute of Scientific and Technical Information of China (English)

    严妙

    2014-01-01

    This paper is an analysis of advertisement translation under Skopos theory. It is explained that the nature of advertisement translation under Skopos theory is reconstructing the information of the source text to persuade the target audience. Three translation strategies are put forward for translating advertisements.

  5. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel, for computer science education. Different from other plagiarism detection methods, WASTK takes some aspects other than the similarity between programs into account. WASTK firstly transfers the source code of a program to an abstract syntax tree and then gets the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency in the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
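
    The pipeline in the abstract (AST features weighted by TF-IDF, then a similarity measure) can be caricatured in a few lines. The sketch below uses parent-child node-type pairs and cosine similarity rather than the paper's full tree kernel, so it is an illustration of the idea, not the WASTK implementation:

```python
# Simplified WASTK-style similarity: TF-IDF-weighted AST features + cosine.
import ast
import math
from collections import Counter

def features(src):
    """Multiset of (parent, child) AST node-type pairs for one program."""
    pairs = Counter()
    for node in ast.walk(ast.parse(src)):
        for child in ast.iter_child_nodes(node):
            pairs[(type(node).__name__, type(child).__name__)] += 1
    return pairs

def tfidf_cosine(a, b, corpus):
    """Cosine similarity of TF-IDF-weighted feature vectors; features that
    appear in every submission (e.g. instructor boilerplate) get low weight."""
    n = len(corpus)
    def idf(f):
        df = sum(1 for doc in corpus if f in doc)
        return math.log((n + 1) / (df + 1)) + 1
    keys = set(a) | set(b)
    va = {k: a[k] * idf(k) for k in keys}
    vb = {k: b[k] * idf(k) for k in keys}
    dot = sum(va[k] * vb[k] for k in keys)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

s1 = "def f(x):\n    return x + 1\n"
s2 = "def g(y):\n    return y + 2\n"   # renamed copy of s1
s3 = "print('hello')\n"
docs = [features(s) for s in (s1, s2, s3)]
print(tfidf_cosine(docs[0], docs[1], docs))  # 1.0: identical structure
print(tfidf_cosine(docs[0], docs[2], docs))  # much lower
```

    Because only node types are compared, renaming identifiers (the classic plagiarism trick) does not change the similarity at all.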

  6. What Froze the Genetic Code?

    Directory of Open Access Journals (Sweden)

    Lluís Ribas de Pouplana

    2017-04-01

    Full Text Available The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation to the reason why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  7. What Froze the Genetic Code?

    Science.gov (United States)

    Ribas de Pouplana, Lluís; Torres, Adrian Gabriel; Rafels-Ybern, Àlbert

    2017-04-05

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation to the reason why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  8. Design of a Code-Maker Translator Assistive Input Device with a Contest Fuzzy Recognition Algorithm for the Severely Disabled

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2015-01-01

    Full Text Available This study developed an assistive system for people with severe physical disabilities, named “code-maker translator assistive input device”, which utilizes a contest fuzzy recognition algorithm and Morse code encoding to provide keyboard and mouse functions for users to access a standard personal computer, smartphone, and tablet PC. This assistive input device has seven features: small size, easy installation, modular design, simple maintenance, functionality, very flexible input interface selection, and scalability of system functions when the device is combined with computer application software or APP programs. Users with severe physical disabilities can use this device to operate various functions of computers, smartphones, and tablet PCs, such as sending e-mail, Internet browsing, playing games, and controlling home appliances. A patient with a brain artery malformation participated in this study. The analysis result showed that the subject could make himself familiar with operating the long/short tones of Morse code in one month. In the future, we hope this system can help more people in need.
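
    Setting the fuzzy tone recognition aside, the Morse encoding layer such a device translates can be illustrated with a plain lookup table (an illustrative sketch, not the paper's algorithm):

```python
# Decode Morse symbol sequences ('.' = short tone, '-' = long tone) to text.
CODE = {"A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
        "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
        "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
        "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
        "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
        "Z": "--.."}
DECODE = {v: k for k, v in CODE.items()}

def decode(signal):
    """Letters are separated by spaces, words by ' / '; unknown symbols -> '?'."""
    return "".join(" " if sym == "/" else DECODE.get(sym, "?")
                   for sym in signal.split())

print(decode("... --- ..."))                                   # SOS
print(decode(".... . .-.. .-.. --- / .-- --- .-. .-.. -.."))   # HELLO WORLD
```

    The device's contribution is upstream of this table: the fuzzy recognizer decides, per user, what counts as a "long" versus a "short" tone before this lookup is applied.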

  9. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' home works. It's not easy for teachers to judge if there's plagiarizing in source code or not. Traditional detection algorithms cannot fit this…

  10. tranSMART: An Open Source and Community-Driven Informatics and Data Sharing Platform for Clinical and Translational Research.

    Science.gov (United States)

    Athey, Brian D; Braxenthaler, Michael; Haas, Magali; Guo, Yike

    2013-01-01

    tranSMART is an emerging global open-source public-private partnership community developing a comprehensive informatics-based analysis and data-sharing cloud platform for clinical and translational research. The tranSMART consortium includes pharmaceutical and other companies, not-for-profits, academic entities, patient advocacy groups, and government stakeholders. The tranSMART value proposition relies on the concept that the global community of users, developers, and stakeholders is the best source of innovation for applications and for useful data. Continued development and use of the tranSMART platform will create a means to enable "pre-competitive" data sharing broadly, saving money and potentially accelerating research translation to cures. Significant transformative effects of tranSMART include: 1) allowing its whole user community to benefit from experts globally, 2) capturing the best of innovation in analytic tools, 3) a growing 'big data' resource, 4) convergent standards, and 5) new informatics-enabled translational science in the pharma, academic, and not-for-profit sectors.

  11. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  12. RASCAL: a domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  13. Financial and clinical governance implications of clinical coding accuracy in neurosurgery: a multidisciplinary audit.

    Science.gov (United States)

    Haliasos, N; Rezajooi, K; O'neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar

    2010-04-01

    Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors and in 40 cases, the initial HRG changed (10.4%). Financially, this translated to £111 of revenue loss per patient episode, projected to £171,452 of annual loss to the department. 85% of all coding errors were due to accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous and, with the coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.

  14. Why Translation Is Difficult

    DEFF Research Database (Denmark)

    Carl, Michael; Schaeffer, Moritz Jonas

    2017-01-01

    The paper develops a definition of translation literality that is based on the syntactic and semantic similarity of the source and the target texts. We provide theoretical and empirical evidence that absolute literal translations are easy to produce. Based on a multilingual corpus of alternative...... translations we investigate the effects of cross-lingual syntactic and semantic distance on translation production times and find that non-literality makes from-scratch translation and post-editing difficult. We show that statistical machine translation systems encounter even more difficulties with non-literality....

  15. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with massive numbers of sensors, towards the realization of the Internet of Sensing Things (IoST).
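
    The compressed-sample/side-information split at the heart of DSC can be shown with the classic textbook syndrome example (a toy illustration, not the D-DSC algorithm): a cluster member sends only a 2-bit syndrome of its 3-bit sample, and the sink recovers the sample exactly using a clusterhead reading that differs in at most one bit.

```python
# Toy Slepian-Wolf-style distributed source coding via syndromes.
from itertools import product

H = ((1, 1, 0), (0, 1, 1))   # parity-check matrix of the {000, 111} code

def syndrome(x):
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def encode(x):
    """Sensor side: compress a 3-bit sample to its 2-bit syndrome."""
    return syndrome(x)

def decode(s, y):
    """Sink side: pick the word with syndrome s closest to side info y."""
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == s]
    return min(coset, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1)        # cluster member's sample
y = (1, 1, 1)        # clusterhead's correlated reading (differs in one bit)
s = encode(x)        # only two bits travel over the radio
print(s, decode(s, y))
```

    Because the two members of each coset differ in all three positions, any side information within Hamming distance one of the sample decodes uniquely, which is exactly the correlation assumption being exploited.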

  16. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III/trademark/, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III/trademark/ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations

  17. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost and effort, a tool should be used that is developed independently of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  18. Translation of the Cofrentes NPP plant model from the TRAC-BF1 code to SNAP-TRACE

    International Nuclear Information System (INIS)

    Escriva, A.; Munuz-Cobo, J. L.; Concejal, A.; Melara, J.; Albendea, M.

    2012-01-01

    The aim is to develop a three-dimensional model of Cofrentes NPP whose results are consistent with those of the programs currently in use (TRAC-BF1, RETRAN), validated with plant data. This comparison must be made globally, so that it does not hide a compensation of errors. To check the correctness of the translation, the results obtained have been compared with TRACE and with the programs currently in use, and the relevant adjustments have been made, taking into account that both the correlations and the models differ between the codes. During this work several errors were detected that must be corrected in future versions of these tools.

  19. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

    The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based upon the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design is considerably changed from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design; hand calculation would be needed for the changed portions of the design, using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components

  20. TRANSLATING AS A PURPOSEFUL ACTIVITY:A PROSPECTIVE APPROACH

    Directory of Open Access Journals (Sweden)

    Christiane Nord

    2006-01-01

    Full Text Available Taking a prospective approach to translation, translators choose their translation strategies according to the purpose or function the translated text is intended to fulfill for the target audience. Since communicative purposes need certain conditions in order to work, it is the translator's task to analyze the conditions of the target culture and to decide whether, and how, the source-text purposes can work for the target audience according to the specifications of the translation brief. If the target-culture conditions differ from those of the source culture, there are usually two basic options: either to transform the text in such a way that it can work under target-culture conditions (= instrumental translation), or to replace the source-text functions by their respective meta-functions (= documentary translation).

  1. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
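
    The radial dose function gL(r) compared in this record is defined by the AAPM TG-43 formalism. The sketch below (mock transverse-axis dose profile and an assumed 0.3 cm source length, not data from the paper) shows how gL(r) divides line-source geometry effects out of a dose profile:

```python
# TG-43 radial dose function from a transverse-axis dose profile (sketch).
import math

def G_L(r, L):
    """Line-source geometry function at theta = 90 degrees: beta / (L * r),
    where beta is the angle the source subtends at the point (r, 90deg)."""
    beta = 2.0 * math.atan(L / (2.0 * r))
    return beta / (L * r)

def g_L(r, dose, r0=1.0, L=0.3):
    """Radial dose function: dose ratio with geometry effects divided out,
    normalized to 1 at the reference distance r0 = 1 cm."""
    return (dose(r) / dose(r0)) * (G_L(r0, L) / G_L(r, L))

# Mock dose profile: geometry factor times a known attenuation/scatter falloff
true_g = lambda r: math.exp(-0.05 * (r - 1.0))
dose = lambda r: 3.0 * G_L(r, 0.3) * true_g(r)
print(g_L(5.0, dose))   # recovers true_g(5.0) = exp(-0.2) ~ 0.819
```

    Cross-section library differences between MCNP versions enter through the simulated dose values, which is why the extracted gL(r) can disagree at large r for low-energy sources even when the geometry handling is identical.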

  2. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
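
    The source-code-transformation approach can be illustrated in miniature (hypothetical helper names; this is not Tangent's API, which the truncated abstract does not detail): parse a function, rewrite its expression AST with symbolic differentiation rules, and emit new Python source that computes the derivative.

```python
# Minimal AD-by-source-transformation sketch using the stdlib ast module.
import ast

def d(node, wrt):
    """Derivative of an expression AST node with respect to variable wrt."""
    if isinstance(node, ast.Constant):
        return ast.Constant(0.0)
    if isinstance(node, ast.Name):
        return ast.Constant(1.0 if node.id == wrt else 0.0)
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return ast.BinOp(d(node.left, wrt), ast.Add(), d(node.right, wrt))
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        # product rule: (u*v)' = u'*v + u*v'
        return ast.BinOp(ast.BinOp(d(node.left, wrt), ast.Mult(), node.right),
                         ast.Add(),
                         ast.BinOp(node.left, ast.Mult(), d(node.right, wrt)))
    raise NotImplementedError(ast.dump(node))

def grad_source(src, wrt="x"):
    """Emit new source for the derivative of a `def f(x): return <expr>`."""
    fn = ast.parse(src).body[0]
    expr = fn.body[0].value          # assumes a single return statement
    return f"def d{fn.name}({wrt}): return {ast.unparse(d(expr, wrt))}"

src = "def f(x): return x * x + 3.0 * x"
print(grad_source(src))
ns = {}
exec(grad_source(src), ns)
print(ns["df"](2.0))   # d/dx (x^2 + 3x) at x = 2 -> 7.0
```

    Unlike tracing-based packages, the output here is ordinary Python source, which can be inspected, optimized, or differentiated again; that readability is the selling point of the SCT approach.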

  3. New Trends outside the Translation Classroom

    Directory of Open Access Journals (Sweden)

    Silvia Martínez Martínez

    2014-09-01

    Full Text Available This paper is based on the study of different elements at the University of Granada’s Faculty of Translation and Interpreting and seeks to elaborate a prototype for a multilingual and accessible audio guide (audio description, SDHH and Spanish sign language interpretation). We defend a new methodology, one that focuses on teaching the translation process from previous museum-based learning experiences in the translation classroom using QR codes. Our main goal is to innovate translation-related teaching based on the new approaches acquired through learning workshop perspectives. In this sense, we will offer an ideal framework in developing the new concept of translation learning. This concept involves systemising a new means of learning and organising the realities of translation itself, encompassing objectives, competences, methodology and evaluation.

  4. The Translator's Turn: in the Cultural Turn

    Institute of Scientific and Technical Information of China (English)

    徐玮玮

    2003-01-01

    Introduction: Douglas Robinson rose to the defense of the "atheoretical" American literary translator in The Translator's Turn (1991). Here, I borrow the title from him, but I will write my paper on the translator's role in translating. In his book, Robinson argued that the literary translator embodies an integration of feeling and thought, of intuition and systematization. In analyzing the "turn" that the translator takes from the source text to the target text, Robinson offered a "dialogical" model, that is, the translator's dialogical engagement with the source language and with the ethic of the target language. Robinson allows for the translator to intervene, subvert, divert, even entertain, emphasizing the creative aspect of literary translation. The translation linguists, scientists, and philosophers have had their chance at translation theory; now it is time, he argued, for the literary translators to have their "turn".

  5. Quantitative Profiling of Peptides from RNAs classified as non-coding

    Science.gov (United States)

    Prabakaran, Sudhakaran; Hemberg, Martin; Chauhan, Ruchi; Winter, Dominic; Tweedie-Cullen, Ry Y.; Dittrich, Christian; Hong, Elizabeth; Gunawardena, Jeremy; Steen, Hanno; Kreiman, Gabriel; Steen, Judith A.

    2014-01-01

    Only a small fraction of the mammalian genome codes for messenger RNAs destined to be translated into proteins, and it is generally assumed that a large portion of transcribed sequences - including introns and several classes of non-coding RNAs (ncRNAs) do not give rise to peptide products. A systematic examination of translation and physiological regulation of ncRNAs has not been conducted. Here, we use computational methods to identify the products of non-canonical translation in mouse neurons by analyzing unannotated transcripts in combination with proteomic data. This study supports the existence of non-canonical translation products from both intragenic and extragenic genomic regions, including peptides derived from anti-sense transcripts and introns. Moreover, the studied novel translation products exhibit temporal regulation similar to that of proteins known to be involved in neuronal activity processes. These observations highlight a potentially large and complex set of biologically regulated translational events from transcripts formerly thought to lack coding potential. PMID:25403355

  6. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

    In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B.

  7. WADOSE, Radiation Source in Vitrification Waste Storage Apparatus

    International Nuclear Information System (INIS)

    Morita, Jun-ichi; Tashiro, Shingo; Kikkawa, Shizuo; Tsuboi, Takashi

    2007-01-01

    1 - Description of program or function: WADOSE is a radiation shielding program which calculates unknown dose rates from known radiation sources. It can also evaluate radiation sources from measured dose rates. For instance, dose rates measured at several points in the hot cell of WASTEF are introduced into WADOSE, and the activities (Ci) of the radiation sources and their positions are estimated using the structural arrangement data of the WASTEF cells. This latter function is very useful in the actual operation of a hot cell. NEA-1142/02: The code was originally written in a non-standard Fortran dialect and has been fully translated into Fortran 90, compatible with Fortran 77. 2 - Method of solution: Point-kernel ray-tracing method (the same method as the QAD code). 3 - Restrictions on the complexity of the problem: Source geometries available for input are cylinders, plates, points, and other geometrically simplified forms
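
The point-kernel method named above lends itself to a compact illustration. The sketch below is a generic point-kernel summation, not WADOSE's actual source; geometry handling, build-up factors, and units are deliberately simplified, and all names are invented for illustration.

```python
import math

def point_kernel_dose_rate(source_points, detector, mu, kerma_const):
    """Sum point-kernel contributions:
    D = sum_i k * A_i * exp(-mu * d_i) / (4 * pi * d_i^2)

    source_points: list of (x, y, z, activity) tuples
    detector:      (x, y, z) detector position
    mu:            linear attenuation coefficient of the medium (1/cm)
    kerma_const:   constant converting photon fluence rate to dose rate
    """
    total = 0.0
    for (sx, sy, sz, activity) in source_points:
        d = math.dist((sx, sy, sz), detector)
        # exponential attenuation along the ray, inverse-square spreading
        total += kerma_const * activity * math.exp(-mu * d) / (4.0 * math.pi * d * d)
    return total
```

A volume source (e.g., the cylinders mentioned above) is handled in this scheme by subdividing it into many such point kernels.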

  8. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  9. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460
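
For reference, the radial dose function gL(r) compared in this study is defined in the standard TG-43 formalism (with reference point r0 = 1 cm, θ0 = π/2) as:

```latex
g_L(r) = \frac{\dot{D}(r,\theta_0)}{\dot{D}(r_0,\theta_0)}
         \cdot \frac{G_L(r_0,\theta_0)}{G_L(r,\theta_0)}
```

where Ḋ(r,θ) is the dose rate and G_L(r,θ) is the line-source geometry function; the discrepancies reported above therefore reflect differences in the computed transverse-axis dose-rate ratios between the MCNP versions.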

  10. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code for modeling the performance of wave energy converters in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion in 6 degrees of freedom using the Cummins time-domain impulse response formulation. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and was completed in Fall 2015. Phase 2 is focused on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be
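
The Cummins time-domain impulse response formulation mentioned above can be written, per degree of freedom, in its standard form (WEC-Sim's exact term grouping may differ):

```latex
(m + A_\infty)\,\ddot{x}(t)
  + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,d\tau
  + C\,x(t)
  = F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t)
```

where m is the body mass, A∞ the infinite-frequency added mass, K the radiation impulse-response kernel, C the hydrostatic restoring coefficient, F_exc the wave excitation force, and F_PTO the power-take-off force.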

  11. FEM BASED PARAMETRIC DESIGN STUDY OF TIRE PROFILE USING DEDICATED CAD MODEL AND TRANSLATION CODE

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2014-12-01

    In this paper a finite element method (FEM) based parametric design study of the tire profile shape and belt width is presented. One of the main obstacles that similar studies have faced is how to change the finite element mesh after a modification of the tire geometry is performed. In order to overcome this problem, a new approach is proposed. It implies automatic update of the finite element mesh, which follows the change of geometric design parameters on a dedicated CAD model. The mesh update is facilitated by an originally developed mapping and translation code. In this way, the performance of a large number of geometrically different tire design variations may be analyzed in a very short time. Although only a pilot, the presented study has also led to the improvement of the existing tire design.

  12. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow with little effort. A rule-based approach with Prolog makes it possible to characterize the verification goals in a concise and declarative way. In this paper, we describe our approach to verify the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against verification goals. The different program flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain-specific language (DSL) in Prolog to express the verification goals.
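
As an illustrative analogue of the fact-extraction and rule-checking idea (using Python's ast module on Python source rather than Prolog on C++, and a hypothetical semaphore API), one can derive call facts and flag an acquire without a matching release:

```python
import ast

# Toy input program with a hypothetical semaphore API (sem_acquire/sem_release).
SOURCE = """
def worker():
    sem_acquire()
    do_work()

def safe_worker():
    sem_acquire()
    do_work()
    sem_release()
"""

def call_facts(tree):
    """Extract (function, called_name) facts, akin to Prolog facts about the AST."""
    facts = []
    for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                facts.append((fn.name, node.func.id))
    return facts

def unbalanced_semaphores(facts):
    """Verification goal: every function that acquires must also release."""
    acquirers = {f for f, c in facts if c == "sem_acquire"}
    releasers = {f for f, c in facts if c == "sem_release"}
    return sorted(acquirers - releasers)

facts = call_facts(ast.parse(SOURCE))
print(unbalanced_semaphores(facts))  # prints: ['worker']
```

The paper's approach is richer (full execution trees, backtracking over branches, CTL goals), but the fact-plus-rule decomposition is the same.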

  13. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
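
A toy version of the spiral-based analog mapping (a 1:2 bandwidth-expansion sketch with invented constants and a brute-force decoder, not the paper's optimized maps) can be written as:

```python
import math

def spiral_encode(s, c=8.0, a=0.1):
    """Map a scalar source sample s in [-1, 1] to two channel symbols on a
    double Archimedean spiral (1:2 bandwidth-expansion analog code).
    c controls how fast the spiral winds; a scales the radius."""
    phi = c * abs(s)
    x1, x2 = a * phi * math.cos(phi), a * phi * math.sin(phi)
    return (x1, x2) if s >= 0 else (x1, -x2)  # mirrored arm for negative s

def spiral_decode(y, c=8.0, a=0.1, grid=20001):
    """Minimum-distance decoding over a dense grid of candidate source values."""
    best_s, best_d = 0.0, float("inf")
    for i in range(grid):
        s = -1.0 + 2.0 * i / (grid - 1)   # candidates in [-1, 1]
        x = spiral_encode(s, c, a)
        d = (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
        if d < best_d:
            best_s, best_d = s, d
    return best_s
```

Winding the spiral faster (larger c) trades graceful degradation at low CSNR for finer source resolution at high CSNR, which is the tension the paper's optimization addresses.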

  14. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  15. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The source term code package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials escaping from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it is written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-8500 version of STCP contains five codes: March 3, Trapmelt, Tcca, Vanessa and Nava. The example presented in this report considers a small LOCA accident in a PWR-type reactor. (M.I.)

  16. Pharmacological profile of brain-derived neurotrophic factor (BDNF) splice variant translation using a novel drug screening assay: a "quantitative code".

    Science.gov (United States)

    Vaghi, Valentina; Polacchini, Alessio; Baj, Gabriele; Pinheiro, Vera L M; Vicario, Annalisa; Tongiorgi, Enrico

    2014-10-03

    The neurotrophin brain-derived neurotrophic factor (BDNF) is a key regulator of neuronal development and plasticity. BDNF is a major pharmaceutical target in neurodevelopmental and psychiatric disorders. However, pharmacological modulation of this neurotrophin is challenging because BDNF is generated by multiple, alternatively spliced transcripts with different 5'- and 3'UTRs. Each BDNF mRNA variant is transcribed independently, but translation regulation is unknown. To evaluate the translatability of BDNF transcripts, we developed an in vitro luciferase assay in human neuroblastoma cells. In unstimulated cells, each BDNF 5'- and 3'UTR determined a different basal translation level of the luciferase reporter gene. However, constructs with either a 5'UTR or a 3'UTR alone showed poor translation modulation by BDNF, KCl, dihydroxyphenylglycine, AMPA, NMDA, dopamine, acetylcholine, norepinephrine, or serotonin. Constructs consisting of the luciferase reporter gene flanked by the 5'UTR of one of the most abundant BDNF transcripts in the brain (exons 1, 2c, 4, and 6) and the long 3'UTR responded selectively to stimulation with the different receptor agonists, and only transcripts 2c and 6 were increased by the antidepressants desipramine and mirtazapine. We propose that BDNF mRNA variants represent "a quantitative code" for regulated expression of the protein. Thus, to discriminate the efficacy of drugs in stimulating BDNF synthesis, it is appropriate to use variant-specific in vitro screening tests. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  17. EQUIVALENCE VERSUS NON-EQUIVALENCE IN ECONOMIC TRANSLATION

    Directory of Open Access Journals (Sweden)

    Cristina, Chifane

    2012-01-01

    This paper aims at highlighting the fact that “equivalence” represents a concept worth revisiting and detailing upon when tackling the translation process of economic texts both from English into Romanian and from Romanian into English. Far from being exhaustive, our analysis will focus upon the problems arising from the lack of equivalence at the word level. Consequently, relevant examples from the economic field will be provided to account for the following types of non-equivalence at word level: culture-specific concepts; the source language concept is not lexicalised in the target language; the source language word is semantically complex; differences in physical and interpersonal perspective; differences in expressive meaning; differences in form; differences in frequency and purpose of using specific forms; and the use of loan words in the source text. Likewise, we shall illustrate a number of translation strategies necessary to deal with the afore-mentioned cases of non-equivalence: translation by a more general word (superordinate); translation by a more neutral/less expressive word; translation by cultural substitution; translation using a loan word or loan word plus explanation; translation by paraphrase using a related word; translation by paraphrase using unrelated words; translation by omission; and translation by illustration.

  18. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    Science.gov (United States)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files, ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: at present, a Python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other Python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate Python routines could easily be consolidated.) An example of the process (a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii) will be presented.
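
The input-translation step described above can be pictured with a minimal sketch. The parameter names and mapping table below are hypothetical placeholders, not the real GYRO or GS2 input schemas; the point is that unmapped parameters are flagged rather than silently dropped.

```python
# Hypothetical mapping: source-code parameter -> (target-code name, unit factor).
GYRO_TO_GS2 = {
    "SAFETY_FACTOR": ("q", 1.0),
    "SHEAR": ("shat", 1.0),
    "ASPECT_RATIO": ("eps", 1.0),
}

def translate(gyro_params):
    """Map one code's input dict onto another's, collecting anything unmapped
    so that no parameter is silently lost in the code-to-code translation."""
    gs2, unmapped = {}, []
    for key, value in gyro_params.items():
        if key in GYRO_TO_GS2:
            name, factor = GYRO_TO_GS2[key]
            gs2[name] = value * factor
        else:
            unmapped.append(key)
    return gs2, unmapped
```

Faithful translation in practice also requires reconciling normalization conventions (length, time, temperature scales), which is exactly where code-to-code discrepancies tend to creep in.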

  19. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses arising from the use of borehole logging equipment containing sealed radioactive sources as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used through wireline logging for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  20. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  1. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. This theoretical simulation, together with the experimental work, is a first for the Iranian group and serves to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, as measured by the NAA method and calculated by the MCNP code, are compared.
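
The NAA step above infers the fluence rate from foil activity via the standard activation relation A = φσN(1 − e^(−λ t_irr)). A minimal sketch of the inversion (illustrative only; not the authors' analysis code):

```python
import math

def neutron_flux(activity_bq, sigma_cm2, n_atoms, half_life_s, t_irr_s):
    """Infer neutron fluence rate phi (n/cm^2/s) from the measured activity of
    an activation foil, using A = phi * sigma * N * (1 - exp(-lambda * t_irr)).

    activity_bq: foil activity at end of irradiation (Bq)
    sigma_cm2:   activation cross section (cm^2)
    n_atoms:     number of target atoms in the foil
    half_life_s: half-life of the activation product (s)
    t_irr_s:     irradiation time (s)
    """
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)  # approach to saturation activity
    return activity_bq / (sigma_cm2 * n_atoms * saturation)
```

For 197Au(n,γ)198Au, the relevant half-life is about 2.7 days; a real analysis would also correct for decay between irradiation and counting, detector efficiency, and self-shielding.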

  2. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.
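
The adaptive noiseless coding referenced above builds on Rice coding; a minimal fixed-parameter Golomb-Rice coder illustrates the core idea (a sketch of the technique, not the USEEM implementation, which adapts the parameter to the data):

```python
def rice_encode(values, k):
    """Golomb-Rice code each nonnegative integer: the quotient n >> k in unary
    (q ones then a terminating zero), followed by the k low-order remainder bits."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q + [0])                             # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))  # k-bit remainder, MSB first
    return bits

def rice_decode(bits, k):
    """Inverse of rice_encode for the same fixed parameter k."""
    values, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == 1:          # read the unary quotient
            q, i = q + 1, i + 1
        i += 1                       # skip the terminating zero
        r = 0
        for _ in range(k):           # read the k remainder bits
            r, i = (r << 1) | bits[i], i + 1
        values.append((q << k) | r)
    return values
```

Small values yield short codewords, so sources dominated by small symbols (such as prediction residuals in scanned text) compress well; adapting k to the local statistics is what makes the scheme "adaptive."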

  3. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  4. Translation Method and Computer Programme for Assisting the Same

    DEFF Research Database (Denmark)

    2013-01-01

    The present invention relates to a translation method comprising the steps of: a translator speaking a translation of a written source text in a target language, an automatic speech recognition system converting the spoken translation into a set of phone and word hypotheses in the target language......, a machine translation system translating the written source text into a set of translations hypotheses in the target language, and an integration module combining the set of spoken word hypotheses and the set of machine translation hypotheses obtaining a text in the target language. Thereby obtaining...

  5. Word Translation Entropy

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Dragsted, Barbara; Hvelplund, Kristian Tangsgaard

    This study reports on an investigation into the relationship between the number of translation alternatives for a single word and eye movements on the source text. In addition, the effect of word order differences between source and target text on eye movements on the source text is studied. In p...

  6. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    There are many manuals explaining language specification among UML-related books. Only some of the books mentioned concentrate on the practical aspects of using the UML language in an effective way, with CASE tools and RUP. The current paper presents transitions from system requirements specification to structural source code, useful while developing an information system.

  7. Phonematic translation of Polish texts by the neural network

    International Nuclear Information System (INIS)

    Bielecki, A.; Podolak, I.T.; Wosiek, J.; Majkut, E.

    1996-01-01

    Using the back-propagation algorithm, we have trained a feed-forward neural network to pronounce Polish, more precisely to translate Polish text into its phonematic counterpart. Depending on the input coding and network architecture, 88%-95% translation efficiency was achieved. (author)
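
As a toy analogue of the trained network (not the authors' architecture; the dimensions and training data are invented), a one-hidden-layer feed-forward net trained by back-propagation on one-hot letter-to-phoneme pairs can be sketched as:

```python
import math, random

def train_g2p(pairs, n_in, n_hid, n_out, epochs=2000, lr=0.5, seed=1):
    """Tiny feed-forward net (one hidden layer, sigmoid units, no biases)
    trained by backpropagation on (one-hot letter, one-hot phoneme) pairs.
    Returns a predict function mapping an input vector to a phoneme index."""
    rnd = random.Random(seed)
    W1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
    W2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in pairs:
            # forward pass
            h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in W1]
            y = [sig(sum(w * hi for w, hi in zip(row, h))) for row in W2]
            # backward pass: squared-error loss, sigmoid derivatives
            dy = [(yi - ti) * yi * (1 - yi) for yi, ti in zip(y, t)]
            dh = [hi * (1 - hi) * sum(W2[o][j] * dy[o] for o in range(n_out))
                  for j, hi in enumerate(h)]
            for o in range(n_out):
                for j in range(n_hid):
                    W2[o][j] -= lr * dy[o] * h[j]
            for j in range(n_hid):
                for i in range(n_in):
                    W1[j][i] -= lr * dh[j] * x[i]
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        y = [sig(sum(w * hi for w, hi in zip(row, h))) for row in W2]
        return max(range(len(y)), key=y.__getitem__)
    return predict
```

A realistic grapheme-to-phoneme setup would feed a sliding window of letter codes (so context disambiguates digraphs such as Polish "sz" or "cz") rather than single letters.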

  8. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.
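
For context, the prime-sequence codes mentioned above follow a standard construction, sketched below together with the periodic (intensity) correlation whose coherent, complex-amplitude counterpart the paper analyzes:

```python
def prime_code(i, p):
    """Prime-sequence OCDMA code i (0 <= i < p) of length p*p for prime p:
    one '1' chip in each of the p groups, at offset (i*j) mod p in group j."""
    code = [0] * (p * p)
    for j in range(p):
        code[j * p + (i * j) % p] = 1
    return code

def correlation(a, b, shift=0):
    """Periodic correlation of two chip sequences at a given cyclic shift."""
    n = len(a)
    return sum(a[t] * b[(t + shift) % n] for t in range(n))
```

For example, with p = 5 each code has weight 5 (autocorrelation peak 5), while the periodic cross-correlation between distinct codes stays at most 2; the paper's point is that under a highly coherent source the chips add as complex amplitudes rather than intensities, so these classical bounds no longer tell the whole story.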

  9. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.

  10. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, and offer enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  11. The long non-coding RNA GAS5 cooperates with the eukaryotic translation initiation factor 4E to regulate c-Myc translation.

    Directory of Open Access Journals (Sweden)

    Guangzhen Hu

    Long noncoding RNAs (lncRNAs) are important regulators of transcription; however, their involvement in protein translation is not well known. Here we explored whether the lncRNA GAS5 is associated with translation initiation machinery and regulates translation. GAS5 was enriched with eukaryotic translation initiation factor-4E (eIF4E) in an RNA-immunoprecipitation assay using lymphoma cell lines. We identified two RNA binding motifs within the eIF4E protein, and the deletion of each motif inhibited the binding of GAS5 with eIF4E. To confirm the role of GAS5 in translation regulation, GAS5 siRNA and in vitro transcribed GAS5 RNA were used to knock down or overexpress GAS5, respectively. GAS5 siRNA had no effect on global protein translation but did specifically increase the c-Myc protein level without an effect on c-Myc mRNA. The mechanism of this increase in c-Myc protein was enhanced association of c-Myc mRNA with the polysome, without any effect on protein stability. In contrast, overexpression of in vitro transcribed GAS5 RNA suppressed c-Myc protein without affecting c-Myc mRNA. Interestingly, GAS5 was found to be bound with c-Myc mRNA, suggesting that GAS5 regulates c-Myc translation through lncRNA-mRNA interaction. Our findings have uncovered a role of GAS5 lncRNA in translation regulation through its interactions with eIF4E and c-Myc mRNA.

  12. Automated Narratives and Journalistic Text Generation: The Lead Organization Structure Translated into Code.

    Directory of Open Access Journals (Sweden)

    Márcio Carneiro dos Santos

    2016-07-01

    This paper describes the experiment of building software capable of generating newspaper leads and titles in an automated fashion from information obtained from the Internet. The theoretical possibility, already noted by Lage at the end of the last century, is based on the relatively rigid and simple structure of this type of story construction, which facilitates the representation, or translation, of its syntax in terms of instructions that a computer can execute. The paper also discusses the relationship between society, technique and technology, giving a brief history of the introduction of digital solutions in newsrooms and their impacts. The development was done with the Python programming language and the NLTK (Natural Language Toolkit) library, and used the results of the 2013 Brazilian Soccer Championship published on an internet portal as a data source.
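
The rigid lead structure that makes such automation possible can be illustrated with a minimal template-filling sketch (team names and wording are invented examples, not the original system's output):

```python
# The lead answers "who, what, when" in one rigid sentence, so it can be
# generated by filling slots from a structured match record.
LEAD_TEMPLATE = ("{winner} beat {loser} {score} on {day}, "
                 "moving up the championship table.")

def generate_lead(match):
    """Fill the lead structure from one match record (a dict of results)."""
    home, away = match["home"], match["away"]
    hg, ag = match["home_goals"], match["away_goals"]
    if hg == ag:
        return f"{home} and {away} drew {hg}-{ag} on {match['day']}."
    winner, loser = (home, away) if hg > ag else (away, home)
    score = f"{max(hg, ag)}-{min(hg, ag)}"
    return LEAD_TEMPLATE.format(winner=winner, loser=loser,
                                score=score, day=match["day"])
```

The original system additionally used NLTK for language processing and scraped its match records from a web portal; the template-filling core, however, is exactly this kind of slot mapping.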

  13. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

Considering the need for directional sensing at standoff in some security applications, and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work investigates the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. MCNPX simulations demonstrate that surrounding the boron straw detectors with an HDPE coded moderator yields a source-detector orientation-specific response that enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm was developed to confirm the viability of the detector system's localization capability; it identified a 1 MeV neutron source with a strength equivalent to 8 kg of WGPu at 50 m standoff to within ±11°.
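The orientation-specific response idea reduces, in its simplest form, to template matching: compare the measured detector responses against precomputed responses for each candidate source angle. The sketch below is purely illustrative; the four-channel templates are synthetic stand-ins, not MCNPX results.

```python
import math

def localize(measured, templates):
    """Return the candidate angle whose normalized response best matches (least squares)."""
    def norm(v):
        s = sum(v)
        return [x / s for x in v]
    m = norm(measured)
    best_angle, best_err = None, math.inf
    for angle, resp in templates.items():
        r = norm(resp)
        err = sum((a - b) ** 2 for a, b in zip(m, r))
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle

# Synthetic per-channel count templates for three source directions (degrees)
templates = {
    -30: [9, 6, 3, 1],
      0: [3, 7, 7, 3],
     30: [1, 3, 6, 9],
}
print(localize([2, 6, 8, 4], templates))  # 0: closest to the 0-degree template
```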

  14. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    Science.gov (United States)

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
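The "get"/"set"-with-derivation pattern the patent describes can be sketched as an attribute store that falls back to registered transformation methods when a value is missing. The class and field names below are invented for illustration; the actual mediator classes are generated from metadata.

```python
class Record:
    """Toy stand-in for a generated translation-library class."""

    def __init__(self, **fields):
        self._fields = dict(fields)
        self._derivations = {}

    def register_derivation(self, name, fn):
        """Declare how to compute `name` from other attributes if it is absent."""
        self._derivations[name] = fn

    def get(self, name):
        if name in self._fields:
            return self._fields[name]
        if name in self._derivations:
            value = self._derivations[name](self)
            self._fields[name] = value      # cache the derived value
            return value
        raise KeyError(name)

    def set(self, name, value):
        self._fields[name] = value

# A record with a known nucleotide length but no protein length
r = Record(length_bp=900)
r.register_derivation("length_aa", lambda rec: rec.get("length_bp") // 3)
print(r.get("length_aa"))  # 300, derived on demand because it was missing
```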

  15. Using Peephole Optimization on Intermediate Code

    NARCIS (Netherlands)

    Tanenbaum, A.S.; van Staveren, H.; Stevenson, J.W.

    1982-01-01

    Many portable compilers generate an intermediate code that is subsequently translated into the target machine's assembly language. In this paper a stack-machine-based intermediate code suitable for algebraic languages (e.g., PASCAL, C, FORTRAN) and most byte-addressed mini- and microcomputers is
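A peephole pass over such a stack-machine intermediate code repeatedly rewrites small instruction windows that match known wasteful patterns. The instruction names and patterns below are a toy illustration, not the paper's actual pattern table.

```python
PATTERNS = [
    # (window, replacement): drop a push immediately cancelled by a pop
    (("PUSH", "POP"), ()),
    # fold an add of the constant zero
    (("PUSHCONST 0", "ADD"), ()),
]

def peephole(code):
    """Apply the pattern table until no window matches (a fixed point)."""
    changed = True
    while changed:
        changed = False
        for window, repl in PATTERNS:
            n = len(window)
            for i in range(len(code) - n + 1):
                if tuple(code[i:i + n]) == window:
                    code = code[:i] + list(repl) + code[i + n:]
                    changed = True
                    break
            if changed:
                break
    return code

print(peephole(["LOAD x", "PUSH", "POP", "PUSHCONST 0", "ADD", "STORE y"]))
# ['LOAD x', 'STORE y']
```

Note the fixed-point loop: removing one window can expose a new match, so the pass iterates until the code is stable.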

  16. Translation Analysis on Civil Engineering Text Produced by Machine Translator

    Directory of Open Access Journals (Sweden)

    Sutopo Anam

    2018-01-01

Full Text Available Translation is extremely needed in communication since people have serious problems with the language used. Translation is usually done by a person in charge of translating the material, but it can also be done by machine; this is called machine translation, reflected in programs developed by programmers. One of them is Transtool, which many people have used to help them solve problems related to translation activities. This paper discusses how important the Transtool program is, how effective it is, and what function it serves. This study applies qualitative research. The sources of data were documents and informants. The study used documentation and in-depth interviewing as the techniques for collecting data, and the collected data were analyzed using interactive analysis. The results of the study show that, first, the Transtool program is helpful for people in translating civil engineering text and functions as an aid or helper; second, the working of the Transtool software program is effective enough; and third, the translation produced by Transtool is good for short and simple sentences but not readable, not understandable and not accurate for long sentences (compound, complex and compound-complex), though the result is informative. The translated material must be edited by a professional translator.

  17. Translation Analysis on Civil Engineering Text Produced by Machine Translator

    Science.gov (United States)

    Sutopo, Anam

    2018-02-01

Translation is extremely needed in communication since people have serious problems with the language used. Translation is usually done by a person in charge of translating the material, but it can also be done by machine; this is called machine translation, reflected in programs developed by programmers. One of them is Transtool, which many people have used to help them solve problems related to translation activities. This paper discusses how important the Transtool program is, how effective it is, and what function it serves. This study applies qualitative research. The sources of data were documents and informants. The study used documentation and in-depth interviewing as the techniques for collecting data, and the collected data were analyzed using interactive analysis. The results of the study show that, first, the Transtool program is helpful for people in translating civil engineering text and functions as an aid or helper; second, the working of the Transtool software program is effective enough; and third, the translation produced by Transtool is good for short and simple sentences but not readable, not understandable and not accurate for long sentences (compound, complex and compound-complex), though the result is informative. The translated material must be edited by a professional translator.

  18. Pauses by Student and Professional Translators in Translation Process

    Directory of Open Access Journals (Sweden)

    Rusdi Noor Rosa

    2018-01-01

Full Text Available Translation, as a meaning-making activity, requires a cognitive process, one realization of which is the pause: a temporary stop or break indicating activity other than typing during a certain period of the translation process. Scholars agree that pauses are an indicator of cognitive processing, without which there would be no translation practice. Despite such agreement, pauses remain debatable, both in terms of their length and in terms of the activities a translator manages while pausing. This study aims at finding out how student translators and professional translators managed pauses in a translation process. It was a descriptive study taking two student translators and two professional translators as participants, who were asked to translate a text from English into bahasa Indonesia. The source text (ST) was a 230-word historical recount text entitled ‘Early History of Yellowstone National Park’, downloaded from http://www.nezperce.com/yelpark9.html. The data were collected using Translog protocols, think-aloud protocols (TAPs) and screen recording. Based on the data analysis, it was found that student translators took their longest pauses in the drafting phase, spent solving problems related to finding the right equivalents for ST words or terms and to overcoming difficulties in encoding their ST understanding in the TL; meanwhile, professional translators took their longest pauses in the post-drafting phase, spent ensuring that their TT was natural and corresponded to the prevailing grammatical rules of the TL.
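Keystroke-logging tools of the Translog kind record timestamped events, from which pauses are extracted as inter-keystroke gaps above a threshold. The sketch below shows that extraction step; the log format and the 1-second threshold are assumptions for illustration, not the study's actual settings.

```python
def pauses(log, threshold=1.0):
    """Return (pause_start, duration) for inter-keystroke gaps >= threshold seconds."""
    out = []
    for (t1, _), (t2, _) in zip(log, log[1:]):
        gap = t2 - t1
        if gap >= threshold:
            out.append((t1, gap))
    return out

# A tiny synthetic log of (timestamp_in_seconds, key) events
log = [(0.0, "T"), (0.25, "h"), (0.5, "e"), (4.0, " "), (4.25, "c")]
print(pauses(log))  # [(0.5, 3.5)]: one long pause after typing "The"
```

Analyses like the one in the record then classify each pause by the phase (drafting, post-drafting) in which it occurs.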

  19. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

[Figures: Fig. 1 shows a code sample from Caliskan-Islam et al. (2015); Fig. 2 shows the corresponding abstract syntax tree from the de-anonymizing programmers paper.] Just as a person can be identified via their handwriting, or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even
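The AST-based fingerprinting idea can be sketched with Python's own `ast` module. The node-count feature set here is a toy stand-in for the much richer lexical, layout, and syntactic features used in the actual stylometry work.

```python
import ast
from collections import Counter

def ast_node_counts(source):
    """Count AST node types: a crude per-author stylistic fingerprint."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

# One author might favor list comprehensions where another writes loops
sample = "def f(xs):\n    return [x * x for x in xs]\n"
counts = ast_node_counts(sample)
print(counts["ListComp"], counts["FunctionDef"])  # 1 1
```

Feature vectors like these, computed over a labelled corpus, feed a standard classifier that attributes unseen code to its most likely author.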

  20. PCI: A PATRAN-NASTRAN model translator

    Science.gov (United States)

    Sheerer, T. J.

    1990-01-01

    The amount of programming required to develop a PATRAN-NASTRAN translator was surprisingly small. The approach taken produced a highly flexible translator comparable with the PATNAS translator and superior to the PATCOS translator. The coding required varied from around ten lines for a shell element to around thirty for a bar element, and the time required to add a feature to the program is typically less than an hour. The use of a lookup table for element names makes the translator also applicable to other versions of NASTRAN. The saving in time as a result of using PDA's Gateway utilities was considerable. During the writing of the program it became apparent that, with a somewhat more complex structure, it would be possible to extend the element data file to contain all data required to define the translation from PATRAN to NASTRAN by mapping of data between formats. Similar data files on property, material and grid formats would produce a completely universal translator from PATRAN to any FEA program, or indeed any CAE system.
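The lookup-table approach described above can be sketched as a table mapping element names to output card formats. The entries and the comma-separated free-field output below are illustrative assumptions, not the translator's actual data file or NASTRAN's full card syntax.

```python
ELEMENT_TABLE = {
    # element shape: (NASTRAN card name, number of grid points)
    "QUAD4": ("CQUAD4", 4),
    "TRI3":  ("CTRIA3", 3),
    "BAR2":  ("CBAR",   2),
}

def translate_element(shape, eid, pid, grids):
    """Emit one bulk-data entry, driven entirely by the lookup table."""
    card, ngrid = ELEMENT_TABLE[shape]
    if len(grids) != ngrid:
        raise ValueError(f"{shape} expects {ngrid} grid points")
    fields = [card, str(eid), str(pid)] + [str(g) for g in grids]
    return ",".join(fields)   # free-field format, for illustration

print(translate_element("TRI3", 101, 1, [7, 8, 9]))
# CTRIA3,101,1,7,8,9
```

Because the per-element logic lives in the table rather than the code, supporting another NASTRAN dialect means editing data, not programming, which is the flexibility the record describes.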

  1. Daisaku Ikeda and the Culture of Translation

    Science.gov (United States)

    Gebert, Andrew

    2012-01-01

    Although not functionally multilingual or a translator himself, Daisaku Ikeda has been deeply involved in translation processes, both as a reader and as someone who has produced texts for translation into various languages. This article examines two sources of influence shaping Ikeda's attitude toward translation culture: the flourishing culture…

  2. Cultural Interchangeability? Culture-Specific Items in Translation

    Directory of Open Access Journals (Sweden)

    Ajtony Zsuzsanna

    2016-12-01

Full Text Available This paper summarizes the results of the translation work carried out within an international project aiming to develop the language skills of staff working in hotel and catering services. As the topics touched upon in the English source texts relate to several European cultures, the cultural differences bring about several challenges related to the translation of realia, or culture-specific items (CSIs). In the first part of the paper, a series of translation strategies for rendering source-language CSIs into the target language is listed, while the second part presents the main strategies employed in the prepared translations.

  3. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...
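As a loose illustration of what a value-set analysis computes, the sketch below propagates explicit finite sets of possible register values through a tiny straight-line register language. This uses enumeration where the paper uses incremental SAT solving over an intermediate representation; the instruction set is invented for the example.

```python
def analyze(program, state):
    """state maps register -> set of possible values; instructions are tuples."""
    for op, dst, src in program:
        if op == "mov":        # dst := constant
            state[dst] = {src}
        elif op == "add":      # dst := dst + src (a register)
            state[dst] = {a + b for a in state[dst] for b in state[src]}
        elif op == "choose":   # non-deterministic input: dst takes any value in src
            state[dst] = set(src)
    return state

state = analyze(
    [("choose", "r0", {0, 4}),   # e.g. an unknown flag, 0 or 4
     ("mov", "r1", 8),
     ("add", "r0", "r1")],
    {},
)
print(sorted(state["r0"]))  # [8, 12]: the value set after the addition
```

The analysis over-approximates: every value the concrete program can produce is in the computed set, which is the soundness property such frameworks aim for.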

  4. The Crc global regulator inhibits the Pseudomonas putida pWW0 toluene/xylene assimilation pathway by repressing the translation of regulatory and structural genes.

    Science.gov (United States)

    Moreno, Renata; Fonseca, Pilar; Rojo, Fernando

    2010-08-06

    In Pseudomonas putida, the expression of the pWW0 plasmid genes for the toluene/xylene assimilation pathway (the TOL pathway) is subject to complex regulation in response to environmental and physiological signals. This includes strong inhibition via catabolite repression, elicited by the carbon sources that the cells prefer to hydrocarbons. The Crc protein, a global regulator that controls carbon flow in pseudomonads, has an important role in this inhibition. Crc is a translational repressor that regulates the TOL genes, but how it does this has remained unknown. This study reports that Crc binds to sites located at the translation initiation regions of the mRNAs coding for XylR and XylS, two specific transcription activators of the TOL genes. Unexpectedly, eight additional Crc binding sites were found overlapping the translation initiation sites of genes coding for several enzymes of the pathway, all encoded within two polycistronic mRNAs. Evidence is provided supporting the idea that these sites are functional. This implies that Crc can differentially modulate the expression of particular genes within polycistronic mRNAs. It is proposed that Crc controls TOL genes in two ways. First, Crc inhibits the translation of the XylR and XylS regulators, thereby reducing the transcription of all TOL pathway genes. Second, Crc inhibits the translation of specific structural genes of the pathway, acting mainly on proteins involved in the first steps of toluene assimilation. This ensures a rapid inhibitory response that reduces the expression of the toluene/xylene degradation proteins when preferred carbon sources become available.

  5. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
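The double-difference idea can be illustrated directly: take a cross-delta between two correlated data sets, then an adjacent-delta along the result. The band values below are made up; real inputs would be adjacent spectral bands or time-shifted sensor data.

```python
def cross_delta(a, b):
    """Element-wise difference between two correlated data sets."""
    return [y - x for x, y in zip(a, b)]

def adjacent_delta(d):
    """Difference between neighboring elements; first element kept as anchor."""
    return [d[0]] + [d[i] - d[i - 1] for i in range(1, len(d))]

band1 = [10, 12, 15, 19]   # e.g. one spectral band
band2 = [11, 14, 18, 23]   # a correlated adjacent band
dd = adjacent_delta(cross_delta(band1, band2))
print(dd)  # [1, 1, 1, 1]: small, low-entropy values that entropy-code well
```

The inverse post-decoding runs the two steps backwards (cumulative sum, then add back the reference set), so the scheme loses nothing by itself; any loss comes only from the compressor applied afterwards.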

  6. Protein functional features are reflected in the patterns of mRNA translation speed.

    Science.gov (United States)

    López, Daniel; Pazos, Florencio

    2015-07-09

The degeneracy of the genetic code makes it possible for the same amino acid string to be coded by different messenger RNA (mRNA) sequences. These "synonymous mRNAs" may differ considerably in a number of aspects related to their overall translational efficiency, such as secondary structure content and availability of the encoded transfer RNAs (tRNAs). Consequently, they may render different yields of the translated polypeptides. These mRNA features related to translation efficiency also play a role locally, resulting in a non-uniform translation speed along the mRNA, which has previously been related to some protein structural features and also used to explain some dramatic effects of "silent" single-nucleotide polymorphisms (SNPs). In this work we perform the first large-scale analysis of the relationship between three experimental proxies of mRNA local translation efficiency and the local features of the corresponding encoded proteins. We found that a number of protein functional and structural features are reflected in the patterns of ribosome occupancy, secondary structure and tRNA availability along the mRNA. One or more of these proxies of translation speed have distinctive patterns around the mRNA regions coding for certain protein local features. In some cases the three patterns follow a similar trend. We also show specific examples where these patterns of translation speed point to the protein's important structural and functional features. This supports the idea that the genome codes a protein's functional features not only as sequences of amino acids but also as subtle patterns of mRNA properties which, probably through local effects on translation speed, have consequences for the final polypeptide. These results open the possibility of predicting a protein's functional regions based on a single genomic sequence, and have implications for heterologous protein expression and fine-tuning of protein function.
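A simple computational proxy for local translation speed is per-codon rarity along the mRNA: rarer codons, with scarcer tRNAs, are read as locally slower. The codon-frequency table below is a toy placeholder; real analyses of this kind use measured tRNA abundances or ribosome-profiling data.

```python
# Invented relative codon frequencies, for illustration only
CODON_FREQ = {"GCU": 0.28, "GCC": 0.40, "AAA": 0.42, "AAG": 0.58, "CGU": 0.09}

def rarity_profile(mrna):
    """One value per codon: higher means rarer, i.e. presumed locally slower."""
    codons = [mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3)]
    return [round(1.0 - CODON_FREQ[c], 2) for c in codons]

print(rarity_profile("GCCAAGCGU"))  # [0.6, 0.42, 0.91]
```

Profiles like this, aligned against protein domain boundaries or structural elements, are how "patterns of translation speed" around protein features can be examined from sequence alone.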

  7. Exploring theoretical functions of corpus data in teaching translation

    Directory of Open Access Journals (Sweden)

    Éric Poirier

    2016-04-01

Full Text Available http://dx.doi.org/10.5007/2175-7968.2016v36nesp1p177 As language referential data banks, corpora are instrumental in the exploration of translation solutions in bilingual parallel texts or conventional usages of source or target language in monolingual general or specialized texts. These roles are firmly rooted in translation processes, from analysis and interpretation of source text to searching for an acceptable equivalent and integrating it into the production of the target text. Provided the creative and not the conservative way be taken, validation or adaptation of target text in accordance with conventional usages in the target language also benefits from corpora. Translation teaching is not exploiting this way of translating that is common practice in the professional translation markets around the world. Instead of showing what corpus tools can do to translation teaching, we start our analysis with a common issue within translation teaching and show how corpus data can help to resolve it in learning activities in translation courses. We suggest a corpus-driven model for the interpretation of ‘business’ as a term and as an item in complex terms based on source text pattern analysis. This methodology will make it possible for teachers to explain and justify interpretation rules that have been defined theoretically from corpus data. It will also help teachers to conceive and non-subjectively assess practical activities designed for learners of translation. Corpus data selected for the examples of rule-based interpretations provided in this paper have been compiled in a corpus-driven study (Poirier, 2015) on the translation of the noun ‘business’ in the field of specialized translation in business, economics, and finance from English to French. The corpus methodology and rule-based interpretation of senses can be generalized and applied in the definition of interpretation rules for other language pairs and other specialized simple and

  8. Exploring theoretical functions of corpus data in teaching translation

    Directory of Open Access Journals (Sweden)

    Éric Poirier

    2016-06-01

Full Text Available As language referential data banks, corpora are instrumental in the exploration of translation solutions in bilingual parallel texts or conventional usages of source or target language in monolingual general or specialized texts. These roles are firmly rooted in translation processes, from analysis and interpretation of source text to searching for an acceptable equivalent and integrating it into the production of the target text. Provided the creative and not the conservative way be taken, validation or adaptation of target text in accordance with conventional usages in the target language also benefits from corpora. Translation teaching is not exploiting this way of translating that is common practice in the professional translation markets around the world. Instead of showing what corpus tools can do to translation teaching, we start our analysis with a common issue within translation teaching and show how corpus data can help to resolve it in learning activities in translation courses. We suggest a corpus-driven model for the interpretation of ‘business’ as a term and as an item in complex terms based on source text pattern analysis. This methodology will make it possible for teachers to explain and justify interpretation rules that have been defined theoretically from corpus data. It will also help teachers to conceive and non-subjectively assess practical activities designed for learners of translation. Corpus data selected for the examples of rule-based interpretations provided in this paper have been compiled in a corpus-driven study (Poirier, 2015) on the translation of the noun ‘business’ in the field of specialized translation in business, economics, and finance from English to French. The corpus methodology and rule-based interpretation of senses can be generalized and applied in the definition of interpretation rules for other language pairs and other specialized simple and complex terms. These works will encourage the

  9. CDC 1604-A translator from the SLANG autocode for the TRA computer

    International Nuclear Information System (INIS)

    Belyaev, A.V.

    1976-01-01

A SLANG-TRA translator has been devised for faster, easier programming. The program is implemented on a CDC 1604-A computer; input data are read from 80-column punch cards and translated into the standard Hollerith 026 code. Programs are processed in batches, and a CDC 1604-A teletype enables the operator to control the translation. The translator makes it possible to evaluate program processing time, and its high speed simplifies program editing and saves manpower

  10. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  11. The evolution of the mitochondrial genetic code in arthropods revisited.

    Science.gov (United States)

    Abascal, Federico; Posada, David; Zardoya, Rafael

    2012-04-01

    A variant of the invertebrate mitochondrial genetic code was previously identified in arthropods (Abascal et al. 2006a, PLoS Biol 4:e127) in which, instead of translating the AGG codon as serine, as in other invertebrates, some arthropods translate AGG as lysine. Here, we revisit the evolution of the genetic code in arthropods taking into account that (1) the number of arthropod mitochondrial genomes sequenced has triplicated since the original findings were published; (2) the phylogeny of arthropods has been recently resolved with confidence for many groups; and (3) sophisticated probabilistic methods can be applied to analyze the evolution of the genetic code in arthropod mitochondria. According to our analyses, evolutionary shifts in the genetic code have been more common than previously inferred, with many taxonomic groups displaying two alternative codes. Ancestral character-state reconstruction using probabilistic methods confirmed that the arthropod ancestor most likely translated AGG as lysine. Point mutations at tRNA-Lys and tRNA-Ser correlated with the meaning of the AGG codon. In addition, we identified three variables (GC content, number of AGG codons, and taxonomic information) that best explain the use of each of the two alternative genetic codes.
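The AGG reassignment the study analyzes can be made concrete by translating the same mRNA under the two code variants. The codon tables below are truncated to a handful of codons purely for illustration; only the AGG entry differs between them.

```python
# Fragment of the invertebrate mitochondrial code: AGG read as serine
INVERTEBRATE_MITO = {"AUG": "M", "AGG": "S", "AAA": "K", "UAA": "*"}
# The arthropod variant discussed in the record: AGG reassigned to lysine
ARTHROPOD_VARIANT = dict(INVERTEBRATE_MITO, AGG="K")

def translate(mrna, code):
    """Translate codon by codon until a stop codon ('*') is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = code[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

seq = "AUGAGGAAAUAA"
print(translate(seq, INVERTEBRATE_MITO), translate(seq, ARTHROPOD_VARIANT))
# MSK MKK: the same genome yields different proteins under the two codes
```

This single-codon divergence is why correlated point mutations in tRNA-Lys and tRNA-Ser, as the study reports, track which meaning of AGG a lineage uses.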

  12. Working with corpora in the translation classroom

    Directory of Open Access Journals (Sweden)

    Ralph Krüger

    2012-10-01

Full Text Available This article sets out to illustrate possible applications of electronic corpora in the translation classroom. Starting with a survey of corpus use within corpus-based translation studies, the didactic value of corpora in the translation classroom and their epistemic value in translation teaching and practice will be elaborated. A typology of translation practice-oriented corpora will be presented, and the use of corpora in translation will be positioned within two general models of translation competence. Special consideration will then be given to the design and application of so-called Do-it-yourself (DIY) corpora, which are compiled ad hoc with the aim of completing a specific translation task. In this context, possible sources for retrieving corpus texts will be presented and evaluated and it will be argued that, owing to time and availability constraints in real-life translation, the Internet should be used as a major source of corpus data. After a brief discussion of possible Internet research techniques for targeted and quality-focused corpus compilation, the possible use of the Internet itself as a macro-corpus will be elaborated. The article concludes with a brief presentation of corpus use in translation teaching in the MA in Specialised Translation Programme offered at Cologne University of Applied Sciences, Germany.

  13. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  14. Mamma Mia, A Singable Translation!

    Directory of Open Access Journals (Sweden)

    Andrej Stopar

    2016-06-01

    Full Text Available The article discusses and analyzes approaches to translating singable texts. It presents a linguistic (prosodic, lexical and structural analysis of the Slovenian translation of the musical Mamma Mia! The aim of the qualitative and quantitative study is to investigate the translation strategies used to produce a singable target text. The results of the analysis suggest that producing a prosodic match is a basic requirement, whereas the lexical, structural and/or poetic characteristics of the source text are subject to changes. Overall, the findings show that the function and the purpose of the translation play a crucial role in the prioritization of translation strategies.

  15. Genetic coding and gene expression - new Quadruplet genetic coding model

    Science.gov (United States)

    Shankar Singh, Rama

    2012-07-01

Successful demonstration of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but may also answer the complex and difficult question of the origin of life; it may make the 21st century a century of the biological sciences as well. Based on the central dogma of biology, genetic codons in conjunction with tRNA play a key role in translating the RNA bases, forming the sequence of amino acids leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including relevance to the origin of life, will be presented.

  16. Six-degree-of-freedom near-source seismic motions I: rotation-to-translation relations and synthetic examples

    Czech Academy of Sciences Publication Activity Database

    Brokešová, J.; Málek, Jiří

    2015-01-01

Vol. 19, No. 2 (2015), p. 491-509 ISSN 1383-4649 R&D Projects: GA ČR GAP210/10/0925; GA MŠk LM2010008; GA ČR GA15-02363S Institutional support: RVO:67985891 Keywords: seismic rotation * near-source region * rotation-to-translation relations * numerical simulations * S-wave velocity Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.550, year: 2015

  17. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

A computer code, COMPASS, based on a compartment-model approach, has been developed to calculate the near-field source term of a high-level-waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. A comparison of the release rates of four typical nuclides with and without the Richard's barrier shows that the barrier significantly decreases the peak release rates from the Engineered Barrier System (EBS) into the host rock

  18. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the ''fixed-source'' term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)

  19. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation, although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results, where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that…

  20. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs.
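The buildup-and-decay bookkeeping such isotope-generation codes perform can be illustrated with a minimal single-nuclide balance. This is a toy sketch assuming a constant production rate, not ORIGEN2's actual matrix-exponential solver; the function name is illustrative:

```python
import math

def activity_after(irradiation_s, decay_s, production_rate, half_life_s):
    """Atoms produced at a constant rate during irradiation, then decaying.

    Solves dN/dt = P - lambda*N during irradiation (N(0) = 0), followed by
    pure decay afterwards. Returns the activity lambda*N in decays/s.
    """
    lam = math.log(2.0) / half_life_s
    n_end = (production_rate / lam) * (1.0 - math.exp(-lam * irradiation_s))  # saturation buildup
    n_now = n_end * math.exp(-lam * decay_s)                                  # post-irradiation decay
    return lam * n_now
```

At saturation (long irradiation, no cooling time) the activity approaches the production rate, which is a useful sanity check on the balance.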

  1. Text Coherence in Translation

    Science.gov (United States)

    Zheng, Yanping

    2009-01-01

    In the thesis a coherent text is defined as a continuity of senses of the outcome of combining concepts and relations into a network composed of knowledge space centered around main topics. And the author maintains that in order to obtain the coherence of a target language text from a source text during the process of translation, a translator can…

  2. History and theory of Scripture translations

    Directory of Open Access Journals (Sweden)

    Jean-Claude Loba-Mkole

    2008-01-01

This article argues for the importance of Bible translations through their historical achievements and theoretical frames of reference. The missionary expansion of Christianity owes its very being to translations. The early Christian communities knew the Bible through the LXX translations, while churches today still continue to use various translations. Translations shape Scripture interpretations, especially when a given interpretation depends on a particular translation. A particular interpretation can also influence a given translation. The article shows how translation theories have been developed to clarify how the source-target transaction is culturally handled. It discusses some of these "theoretical frames", namely functional equivalence, relevance, literary functional equivalence and intercultural mediation. By means of a historical overview and a reflection on Bible translation theories, the article aims to focus on the role of Africa in translation history.

  3. Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

The proliferation of automated testing capabilities gives rise to a need for thorough testing of large software systems, including system inter-component interfaces. The objective of this research is to build a method for inter-procedural, inter-unit analysis that allows us to analyse large and complex software systems, including multi-architecture projects (like Android OS), and to support projects with complex build systems. Since the selected Clang Static Analyzer uses source code directly as input data, we needed to develop a special technique to enable inter-unit analysis for such an analyzer. This problem is of a special nature because C and C++ language features assume and encourage the separate compilation of project files. We describe the build and analysis system that was implemented around Clang Static Analyzer to enable inter-unit analysis, and we consider problems related to the support of complex projects. We also consider the task of merging abstract syntax trees of translation units and its related problems, such as handling conflicting definitions and supporting complex build systems and complex projects, including multi-architecture projects, with examples. We consider both issues related to language design and human-related mistakes (which may be intentional). We describe some heuristics that were used in this work to make the merging process faster. The developed system was tested using Android OS as the input to show that it is applicable even to such complicated projects. The system does not depend on the inter-procedural analysis method and allows arbitrary changes of its algorithm.
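The merging of per-unit definitions with conflict detection that the abstract describes can be sketched in heavily simplified form, with plain dictionaries standing in for Clang's AST structures (the symbol/signature representation below is hypothetical):

```python
def merge_units(units):
    """Merge per-translation-unit definition maps {symbol: signature}.

    Identical re-definitions (e.g. the same inline function included in
    several units) merge silently; differing definitions of one symbol are
    reported as conflicts, roughly analogous to an ODR violation.
    """
    merged, conflicts = {}, []
    for unit_name, defs in units.items():
        for symbol, signature in defs.items():
            if symbol in merged and merged[symbol] != signature:
                conflicts.append((symbol, unit_name))  # conflicting definition
            else:
                merged[symbol] = signature
    return merged, conflicts
```

A real implementation must also track declaration contexts, build configurations and target architectures, which is where the heuristics mentioned above come in.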

  4. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
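The "tokenise, then search for long common substrings" technique can be sketched as follows; identifiers collapse into a single token class so that renaming variables does not hide copied structure (a minimal illustration of a paired structural metric, not MOSS's or JPlag's actual algorithm):

```python
import re

def tokenize(source):
    """Crude tokenizer: identifiers and numbers become one class each."""
    tokens = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", source):
        if re.match(r"[A-Za-z_]", tok):
            tokens.append("ID")
        elif tok.isdigit():
            tokens.append("NUM")
        else:
            tokens.append(tok)  # operators and punctuation kept verbatim
    return tokens

def longest_common_run(a, b):
    """Length of the longest common substring of two token lists (DP)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0] * (len(b) + 1)
        for j, y in enumerate(b, 1):
            if x == y:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best
```

With this metric, `total = total + i` and `sum = sum + j` match perfectly despite the renaming, which is exactly the behaviour a plagiarism detector wants.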

  5. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are in general agreement with the experimental data, within acceptable limits of uncertainty.

  6. Developing a Translator from C Programs to Data Flow Graphs Using RAISE

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth

    1996-01-01

Describes how a translator from a subset of C to data flow graphs has been formally developed using the RAISE (Rigorous Approach to Industrial Software Engineering) method and tools. In contrast to many development examples described in the literature, this development is not a case study, but a real one, and it covers all development phases, including the code-generation phase. The translator is now one of the components of the LYCOS (LYngby CO-Synthesis) system, which is a software/hardware co-synthesis system under development at the Technical University of Denmark. The translator, together with the other components of LYCOS, provides a means for moving parts of C programs to dedicated hardware, thereby obtaining better performance. The translator was refined in steps, starting with an abstract specification and ending with a concrete specification from which C++ code was then automatically…
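The idea of turning an expression into a data flow graph can be sketched with Python's ast module standing in for a C front end (an illustration of the concept only, not the RAISE development itself; node naming is hypothetical):

```python
import ast

def dataflow_edges(expr):
    """Build (operand -> operator) data-flow edges for one expression.

    Each binary operator becomes a uniquely numbered node fed by its two
    operand subtrees; leaves are variable names or constants.
    """
    edges, counter = [], [0]

    def visit(node):
        if isinstance(node, ast.BinOp):
            counter[0] += 1
            op = f"{type(node.op).__name__}#{counter[0]}"
            edges.append((visit(node.left), op))
            edges.append((visit(node.right), op))
            return op
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return repr(node.value)
        raise NotImplementedError(type(node).__name__)

    visit(ast.parse(expr, mode="eval").body)
    return edges
```

For `a + b * c` this yields edges feeding `b` and `c` into a multiply node whose result, together with `a`, feeds an add node.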

  7. Transferring communicative clues in translation

    OpenAIRE

    Navarro Errasti, María Pilar

    2001-01-01

In this essay I make use of the category of communicative clue, as defined by Gutt (1991/2000), to explain certain differences between an original work and its various translations. Communicative clues are very useful analytical devices that show nuances of meaning and style. In the source texts, they sometimes go unnoticed. But when a translation is done, the translator may come across these features and should ideally transfer them. Very frequently, however, they are ignored. Here a particular…

  8. THE CONCEPT OF FIDELITY IN COMICS TRANSLATION

    Directory of Open Access Journals (Sweden)

    Erico Assis

    2016-11-01

The long-discussed, and frequently dismissed, concept of translation faithfulness or translation fidelity, though usually applied to literary texts, has its fair share of applications when considered for comics translation. In literary translation, non-linguistic portions such as illustrations are often considered addenda or "paratexts" relative to the main, linguistic text. Comics, in turn, present a set of features which single them out as a form that demands a new concept of "text" and, therefore, of translation fidelity. The comic-reading process, as pertaining to cognitive apprehension, implies interpretative accords that differ from the ones in purely linguistic texts: each and every element of the comics page, non-linguistic (mainly imagetic) signs, linguistic signs, panel borders, typography and such, is intertwined and should be perceived with regard to its spatial and topological relations. This approach to understanding comics is based on Groensteen (1999) and his concepts of arthrology, spatio-topia, page layout, breakdown and braiding. As for translation fidelity, we rely on authors such as Berman (1984), Guidere (2010) and Aubert (1993). On comics translation, Zanettin (2008), Rota (2008) and Yuste Frías (2010, 2011) are of particular interest. Based on various concepts of fidelity, supported by samples of translated comics with varied degrees of fidelity to the source text, we discuss the different grounds of source-text fidelity, target-reader fidelity and source-author fidelity in the following instances: linguistic sign fidelity, imagetic sign fidelity, spatio-topia fidelity, typographic fidelity and format fidelity.

  9. Shared Representations and the Translation Process

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Carl, Michael

    2015-01-01

The purpose of the present chapter is to investigate automated processing during translation. We provide evidence from a translation priming study which suggests that translation involves activation of shared lexico-semantic and syntactical representations, i.e., the activation of features of both source and target language items which share one single cognitive representation. We argue that activation of shared representations facilitates automated processing. The chapter revises the literal translation hypothesis and the monitor model (Ivir 1981; Toury 1995; Tirkkonen-Condit 2005), and re…

  10. Shared Representations and the Translation Process

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Carl, Michael

    2013-01-01

The purpose of the present paper is to investigate automated processing during translation. We provide evidence from a translation priming study which suggests that translation involves activation of shared lexico-semantic and syntactical representations, i.e., the activation of features of both source and target language items which share one single cognitive representation. We argue that activation of shared representations facilitates automated processing. The paper revises the literal translation hypothesis and the monitor model (Ivir 1981; Toury 1995; Tirkkonen-Condit 2005), and re…

  11. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time.

  12. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    Science.gov (United States)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local to regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
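The finite-difference principle behind such codes can be illustrated with a single explicit update step for the 1-D scalar wave equation (a textbook sketch, not OpenSWPC's actual 3-D viscoelastic staggered-grid scheme):

```python
def step_wave(u_prev, u_curr, c, dt, dx):
    """One explicit second-order finite-difference step of the 1-D wave
    equation u_tt = c^2 u_xx with fixed (zero) ends."""
    r2 = (c * dt / dx) ** 2  # stability requires the CFL condition c*dt/dx <= 1
    u_next = [0.0] * len(u_curr)
    for i in range(1, len(u_curr) - 1):
        u_next[i] = (2 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
    return u_next
```

A production code adds higher-order stencils, attenuation terms and absorbing boundaries, but the time-stepping core has this shape.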

  13. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.

  14. Death of a dogma: eukaryotic mRNAs can code for more than one protein.

    Science.gov (United States)

    Mouilleron, Hélène; Delcourt, Vivian; Roucou, Xavier

    2016-01-08

    mRNAs carry the genetic information that is translated by ribosomes. The traditional view of a mature eukaryotic mRNA is a molecule with three main regions, the 5' UTR, the protein coding open reading frame (ORF) or coding sequence (CDS), and the 3' UTR. This concept assumes that ribosomes translate one ORF only, generally the longest one, and produce one protein. As a result, in the early days of genomics and bioinformatics, one CDS was associated with each protein-coding gene. This fundamental concept of a single CDS is being challenged by increasing experimental evidence indicating that annotated proteins are not the only proteins translated from mRNAs. In particular, mass spectrometry (MS)-based proteomics and ribosome profiling have detected productive translation of alternative open reading frames. In several cases, the alternative and annotated proteins interact. Thus, the expression of two or more proteins translated from the same mRNA may offer a mechanism to ensure the co-expression of proteins which have functional interactions. Translational mechanisms already described in eukaryotic cells indicate that the cellular machinery is able to translate different CDSs from a single viral or cellular mRNA. In addition to summarizing data showing that the protein coding potential of eukaryotic mRNAs has been underestimated, this review aims to challenge the single translated CDS dogma. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Translational control in plant antiviral immunity

    Directory of Open Access Journals (Sweden)

    João Paulo B. Machado

Due to the limited coding capacity of viral genomes, plant viruses depend extensively on the host cell machinery to support the viral life cycle and, thereby, interact with a large number of host proteins during infection. Within this context, as plant viruses do not harbor the components required for translation, they have developed several strategies to subvert the host protein synthesis machinery to produce the viral proteins rapidly and efficiently. As a countermeasure against infection, plants have evolved defense mechanisms that impair viral infections. Among them, host-mediated translational suppression has been characterized as an efficient means to restrict infection. To specifically suppress translation of viral mRNAs, plants can deploy recessive resistance genes, which encode translation initiation factors from the eIF4E and eIF4G families that are required for viral mRNA translation and multiplication. Additionally, recent evidence has demonstrated that, as an alternative to the cleavage of viral RNA targets, host cells can suppress viral protein translation to silence viral RNA. Finally, a novel strategy of plant antiviral defense based on suppression of host global translation, mediated by the transmembrane immune receptor NIK1 (nuclear shuttle protein (NSP)-interacting kinase 1), is discussed in this review.

  16. Emergence of a code in the polymerization of amino acids along RNA templates.

    Directory of Open Access Journals (Sweden)

    Jean Lehmann

    2009-06-01

The origin of the genetic code in the context of an RNA world is a major problem in the field of biophysical chemistry. In this paper, we describe how the polymerization of amino acids along RNA templates can be affected by the properties of both molecules. Considering a system without enzymes, in which the tRNAs (the translation adaptors) are not loaded selectively with amino acids, we show that an elementary translation governed by a Michaelis-Menten type of kinetics can follow different polymerization regimes: random polymerization, homopolymerization and coded polymerization. The regime under which the system is running is set by the relative concentrations of the amino acids and the kinetic constants involved. We point out that the coding regime can naturally occur under prebiotic conditions. It generates partially coded proteins through a mechanism which is remarkably robust against non-specific interactions (mismatches) between the adaptors and the RNA template. Features of the genetic code support the existence of this early translation system.
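The dependence of the polymerization regime on relative concentrations and kinetic constants can be illustrated with a toy competition model, in which each amino acid's incorporation probability at a template position is proportional to its concentration times an effective rate constant (a simplification of the paper's Michaelis-Menten treatment; the names and numbers are illustrative):

```python
def incorporation_probs(concentrations, rate_constants):
    """Probability that each amino acid is added at a template position,
    assuming competing channels whose effective rates are proportional
    to concentration * rate constant (normalized to sum to 1)."""
    rates = {aa: concentrations[aa] * rate_constants[aa] for aa in concentrations}
    total = sum(rates.values())
    return {aa: r / total for aa, r in rates.items()}
```

Skewing either the concentrations or the kinetic constants pushes the system from random polymerization toward homopolymerization or coded incorporation.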

  17. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources.

  18. Prototype of interactive Web Maps: an approach based on open sources

    Directory of Open Access Journals (Sweden)

    Jürgen Philips

    2004-07-01

To explore the potential available in the World Wide Web (WWW), a prototype of an interactive Web map was elaborated using standardized codes and open sources, such as the eXtensible Markup Language (XML), Scalable Vector Graphics (SVG), the Document Object Model (DOM), the script languages ECMAScript/JavaScript and "PHP: Hypertext Preprocessor", and PostgreSQL with its extension PostGIS, to disseminate information related to the urban real estate register. Data from the City Hall of São José, Santa Catarina, referring to the Campinas district, were used. Using the client/server model, a prototype of a Web map with standardized codes and open sources was implemented, allowing a user to visualize Web maps using only Adobe's Viewer 3.0 plug-in in his/her browser. Aiming at a good cartographic project for the Web, rules of graphical translation were obeyed and different interaction functionalities were implemented, such as interactive legends, symbolization and dynamic scale. From the results, the use of standardized codes and open sources in interactive Web mapping projects can be recommended. With the use of open-source code in public and private administration, the possibility of technological development is amplified and, consequently, expenses for the acquisition of computer programs are reduced. Besides, it stimulates the development of computer applications targeting specific demands and requirements.

  19. ANT: Software for Generating and Evaluating Degenerate Codons for Natural and Expanded Genetic Codes.

    Science.gov (United States)

    Engqvist, Martin K M; Nielsen, Jens

    2015-08-21

    The Ambiguous Nucleotide Tool (ANT) is a desktop application that generates and evaluates degenerate codons. Degenerate codons are used to represent DNA positions that have multiple possible nucleotide alternatives. This is useful for protein engineering and directed evolution, where primers specified with degenerate codons are used as a basis for generating libraries of protein sequences. ANT is intuitive and can be used in a graphical user interface or by interacting with the code through a defined application programming interface. ANT comes with full support for nonstandard, user-defined, or expanded genetic codes (translation tables), which is important because synthetic biology is being applied to an ever widening range of natural and engineered organisms. The Python source code for ANT is freely distributed so that it may be used without restriction, modified, and incorporated in other software or custom data pipelines.
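The codon expansion at the heart of such a tool can be sketched with the standard IUPAC ambiguity alphabet (the table below is the standard IUPAC one; this is an independent illustration, not ANT's source code):

```python
from itertools import product

# Standard IUPAC nucleotide ambiguity codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(degenerate_codon):
    """All concrete codons encoded by one degenerate (IUPAC) codon."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in degenerate_codon))]
```

The commonly used NNK scheme, for example, expands to 4 x 4 x 2 = 32 concrete codons; evaluating a scheme then amounts to translating each concrete codon through the chosen translation table.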

  20. The MCU-RFFI Monte Carlo code for reactor design applications

    International Nuclear Information System (INIS)

    Gomin, E.A.; Maiorov, L.V.

    1995-01-01

MCU-RFFI is a general-purpose, continuous-energy, general-geometry Monte Carlo code for solving external source or criticality problems for neutron transport in the energy range of 20 MeV to 10⁻⁵ eV. The main fields of MCU-RFFI application are as follows: (a) nuclear data validation; (b) design calculations (space reactors and other); (c) verification of design codes. MCU-RFFI is also supplied with tools to check the accuracy of design codes. These tools permit the user to calculate: the few-group parameters of reactor cells, including diffusion coefficients defined in a variety of ways; reaction rates for various nuclei, energy and space bins; and the kinetic parameters of systems, taking into account delayed neutrons. Boundary conditions include vacuum, white and specular reflection, and the condition of translational symmetry. Criticals with the neutron leakage given by the buckling vector may be calculated by solving Benoist's problem. The dependence of the criticality coefficient on buckling may be determined during a single code run, and the critical buckling may be determined. Double-heterogeneous systems with fuel elements containing many thousands of spherical microcells can be solved.

  1. caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research.

    Science.gov (United States)

    Crowley, Rebecca S; Castine, Melissa; Mitchell, Kevin; Chavan, Girish; McSherry, Tara; Feldman, Michael

    2010-01-01

    The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.

  2. Fast decoder for local quantum codes using Groebner basis

    Science.gov (United States)

    Haah, Jeongwan

    2013-03-01

Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n² log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.

  3. Development of an open-source web-based intervention for Brazilian smokers - Viva sem Tabaco.

    Science.gov (United States)

    Gomide, H P; Bernardino, H S; Richter, K; Martins, L F; Ronzani, T M

    2016-08-02

Web-based interventions for smoking cessation available in Portuguese do not adhere to evidence-based treatment guidelines. Besides, all existing web-based interventions are built on proprietary platforms that developing countries often cannot afford. We aimed to describe the development of "Viva sem Tabaco", an open-source web-based intervention. The development of the intervention included selecting content from evidence-based guidelines for smoking cessation, designing the first layout, conducting two focus groups to identify potential features, refining the layout based on the focus groups and correcting the content based on feedback provided by specialists in smoking cessation. At the end, we released the source code and the intervention on the Internet and translated them into Spanish and English. The intervention fills gaps in the information available in Portuguese and addresses the lack of open-source interventions for smoking cessation. The open-source licensing format and its translation system may help researchers from different countries deploy evidence-based interventions for smoking cessation.

  4. CULTURAL TRANSFER IN THE TRANSLATIONS OF MEDIA ORGANIZATION WEBSITES: A DESCRIPTIVE ANALYSIS OF ARTICLES AND THEIR TURKISH TRANSLATIONS ON THE BBC WEBSITE

    Directory of Open Access Journals (Sweden)

    Özge Aksoy

    2017-04-01

The websites of media organizations address readers from many different languages and cultures. Each culture has its own specific values, habits and norms. Translators employ translation strategies in order to transfer culture-specific items (hereinafter CSIs) from a source text (hereinafter ST) to a target text (hereinafter TT). They are supposed to produce translations that are completely comprehensible for the target readers. In this study, articles of the British Broadcasting Corporation (hereinafter the BBC) that were translated by a BBC Turkish Service translator and published under the link 'Dergi' are analysed based on Toury's translational norms and Aixela's classification of CSIs. The study is designed with a qualitative method and supported with an interview to triangulate the data. The findings show that the translations are generally 'acceptable'; that is, the translator has a tendency towards the target culture according to Toury's translational norms. She mostly employs Aixela's conservation strategies to transfer CSIs, and this indicates the general tendency of the translations to 'be a representation of a source text'. However, the translator specifies the target readers of the link as the 'educated young population', and this does not complicate the comprehensibility of the CSIs for the target readers.

  5. Predicting Translation Initiation Rates for Designing Synthetic Biology

    International Nuclear Information System (INIS)

    Reeve, Benjamin; Hargest, Thomas; Gilbert, Charlie; Ellis, Tom

    2014-01-01

    In synthetic biology, precise control over protein expression is required in order to construct functional biological systems. A core principle of the synthetic biology approach is model-guided design, and based on biological understanding of the process, models of prokaryotic protein production have been described. Translation initiation is a rate-limiting step in protein production from mRNA and depends on the sequence of the 5′-untranslated region and the start of the coding sequence. Translation rate calculators are programs that estimate translation initiation rates from the sequence of these regions of an mRNA; as protein expression is proportional to the rate of translation initiation, such calculators have been shown to give good approximations of protein expression levels. In this review, three currently available translation rate calculators developed for synthetic biology are considered, and their limitations and possible future progress are discussed.
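The review itself gives no formulas, but the thermodynamic relationship that calculators of this kind rely on can be sketched as follows. The Boltzmann form and the β value follow the published Salis RBS Calculator model; the individual ΔG values below are invented for illustration only:

```python
import math

# Thermodynamic model used by several translation-rate calculators:
# the initiation rate is proportional to exp(-beta * dG_total), where
# dG_total sums free-energy terms for the ribosome / 5'-UTR interaction.
# beta ~ 0.45 mol/kcal is the slope reported for the Salis RBS Calculator;
# the dG inputs used below are illustrative, not measured values.

BETA = 0.45  # mol/kcal, apparent Boltzmann factor

def relative_initiation_rate(dG_mRNA_rRNA, dG_spacing, dG_standby, dG_start, dG_mRNA):
    """Relative translation initiation rate from free-energy terms (kcal/mol).

    dG_total = dG_mRNA_rRNA + dG_spacing + dG_standby + dG_start - dG_mRNA
    (unfolding existing mRNA structure costs energy, hence the subtraction).
    """
    dG_total = dG_mRNA_rRNA + dG_spacing + dG_standby + dG_start - dG_mRNA
    return math.exp(-BETA * dG_total)

# A stronger ribosome-binding site (more negative dG_total) gives a higher rate:
strong = relative_initiation_rate(-12.0, 0.5, 0.0, -1.2, -4.0)
weak = relative_initiation_rate(-4.0, 2.0, 0.0, -1.2, -4.0)
print(strong / weak)  # ratio of predicted expression levels
```

Because the scale is exponential, a few kcal/mol of extra ribosome-binding energy translates into order-of-magnitude expression differences, which is why such calculators can rank candidate sequences usefully even when absolute rates are uncertain.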

  6. Translation and spaces of reading

    Directory of Open Access Journals (Sweden)

    Clive Scott

    2014-01-01

    Full Text Available The author discusses the relations between the original and its translation in terms of imaginary spaces. The target text is understood here as one of the possible images of the source text, seen from a perspective that is not accessible to the original. According to the concept presented here, an artistic translation is not so much reconstructed as conceptually constructed, in the manner of a cubist object. The author comments on acts of creative reading with examples of his own experimental translations from contemporary French poetry.

  7. Direct Profiling the Post-Translational Modification Codes of a Single Protein Immobilized on a Surface Using Cu-free Click Chemistry.

    Science.gov (United States)

    Kim, Kyung Lock; Park, Kyeng Min; Murray, James; Kim, Kimoon; Ryu, Sung Ho

    2018-05-23

    Combinatorial post-translational modifications (PTMs), which can serve as dynamic "molecular barcodes", have been proposed to regulate distinct protein functions. However, studies of combinatorial PTMs on single protein molecules have been hindered by a lack of suitable analytical methods. Here, we describe erasable single-molecule blotting (eSiMBlot) for combinatorial PTM profiling. This assay is performed in a highly multiplexed manner and leverages the benefits of covalent protein immobilization, cyclic probing with different antibodies, and single-molecule fluorescence imaging. In particular, facile and efficient covalent immobilization on a surface using Cu-free click chemistry permits multiple rounds (>10) of antibody erasing/reprobing without loss of antigenicity. Moreover, cumulative detection of coregistered multiple data sets for immobilized single-epitope molecules, such as the HA peptide, can be used to increase the antibody detection rate. Finally, eSiMBlot enables direct visualization and quantitative profiling of combinatorial PTM codes at the single-molecule level, as we demonstrate by revealing novel phospho-codes of ligand-induced epidermal growth factor receptor. Thus, eSiMBlot provides an unprecedentedly simple, rapid, and versatile platform for analyzing the vast number of combinatorial PTMs in biological pathways.

  8. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During software development, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.
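The abstract does not show the proposed tool, but the core idea, a class described as metamodel data from which source text is emitted, can be sketched minimally. The metamodel schema, class name, and attributes below are invented for illustration:

```python
# Minimal sketch of metamodel-driven code generation: a class is described
# as data, and Python source for it is emitted from a template. The schema
# and names here are invented; the paper's actual metamodel is richer.

metamodel = {
    "class": "Customer",
    "attributes": [("name", "str"), ("email", "str")],
}

def generate_class(model):
    """Emit Python source for a simple entity class from a metamodel dict."""
    lines = [f"class {model['class']}:"]
    params = ", ".join(f"{n}: {t}" for n, t in model["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for name, _ in model["attributes"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

source = generate_class(metamodel)
print(source)

# The generated text is itself valid Python and can be loaded directly:
namespace = {}
exec(source, namespace)
customer = namespace["Customer"]("Ada", "ada@example.org")
```

The time saving the paper targets comes from writing the metamodel once and regenerating boilerplate (constructors, accessors, persistence code) for every class, instead of hand-coding each artifact.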

  9. Syntactic Variance and Priming Effects in Translation

    DEFF Research Database (Denmark)

    Bangalore, Srinivas; Behrens, Bergljot; Carl, Michael

    2016-01-01

    The present work investigates the relationship between syntactic variation and priming in translation. It is based on the claim that languages share a common cognitive network of neural activity. When the source and target languages are solicited in a translation context, this shared network can lead to facilitation effects, so-called priming effects. We suggest that priming is a default setting in translation, a special case of language use where source and target languages are constantly co-activated. Such priming effects are not restricted to lexical elements, but also occur on the syntactic level. We tested these hypotheses with translation data from the TPR database, more specifically for three language pairs (English-German, English-Danish, and English-Spanish). Our results show that response times are shorter when syntactic structures are shared. The model explains this through…

  10. Dragon TIS Spotter: an Arabidopsis-derived predictor of translation initiation sites in plants.

    Science.gov (United States)

    Magana-Mora, Arturo; Ashoor, Haitham; Jankovic, Boris R; Kamau, Allan; Awara, Karim; Chowdhary, Rajesh; Archer, John A C; Bajic, Vladimir B

    2013-01-01

    In higher eukaryotes, the identification of translation initiation sites (TISs) has focused on finding these signals in cDNA or mRNA sequences. Using Arabidopsis thaliana (A.t.) information, we developed a prediction tool for signals within genomic sequences of plants that correspond to TISs. Our tool requires only the genome sequence, not expressed sequences. Its sensitivity/specificity is 90.75%/92.2% for A.t., 66.8%/94.4% for Vitis vinifera, and 81.6%/94.4% for Populus trichocarpa, which suggests that our tool can be used in the annotation of different plant genomes. We provide a list of features used in our model. Further study of these features may improve our understanding of the mechanisms of translation initiation. Our tool is implemented as an artificial neural network. It is available as a web-based tool and, together with the source code, the list of features, and the data used for model development, is accessible at http://cbrc.kaust.edu.sa/dts.

  11. Modification and testing of the code POLLA

    International Nuclear Information System (INIS)

    Carlson, B.V.; Chalhoub, E.S.; Melnikoff, M.

    1985-01-01

    The implementation and testing of the POLLA computer code, which translates resolved resonance parameters for low-energy neutrons from the Reich-Moore formalism into the equivalent Adler-Adler ones, are discussed. The POLLA computer code was developed by the Nuclear Data Center of the Instituto de Estudos Avançados, in Brazil, to evaluate actinide resonance cross sections. (Author) [pt

  12. Translating Answers to Open-Ended Survey Questions in Cross-Cultural Research: A Case Study on the Interplay between Translation, Coding, and Analysis

    Science.gov (United States)

    Behr, Dorothée

    2015-01-01

    Open-ended probing questions in cross-cultural surveys help uncover equivalence problems in cross-cultural survey research. For languages that a project team does not understand, probe answers need to be translated into a common project language. This article presents a case study on translating open-ended, that is, narrative answers. It describes…

  13. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected the cinema. The indexical relation and realistic effect with the photographed world much praised by André Bazin and Roland Barthes is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro's consideration of "how a nonindexical realism might be possible" (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  14. Tension and Approximation in Poetic Translation

    Science.gov (United States)

    Al-Shabab, Omar A. S.; Baka, Farida H.

    2015-01-01

    Simple observation reveals that each language and each culture enjoys specific linguistic features and rhetorical traditions. In poetry translation difference and the resultant linguistic tension create a gap between Source Language and Target language, a gap that needs to be bridged by creating an approximation processed through the translator's…

  15. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and an ability to locate sources precisely. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS- or HEAO-type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented
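The decoding principle behind such detectors can be shown with a toy one-dimensional simulation (illustrative numbers, not the ATREX design): a random open/closed mask casts shifted shadows of the sky onto a position-sensitive detector, and balanced cross-correlation of the counts with the mask pattern recovers the source position:

```python
import random

# Toy 1-D coded-aperture imaging: each detector bin sees the sky through a
# cyclically shifted copy of a random mask; correlating the detector counts
# with the (mean-subtracted) mask reconstructs the sky.

random.seed(7)
N = 64
mask = [random.randint(0, 1) for _ in range(N)]   # 1 = open mask element

def detector_counts(sky):
    """Counts in bin i: sky folded through the mask shifted by i."""
    return [sum(sky[j] * mask[(i - j) % N] for j in range(N)) for i in range(N)]

def reconstruct(counts):
    """Balanced cross-correlation with the mask; peaks at source positions."""
    mean = sum(mask) / N
    return [sum(counts[i] * (mask[(i - k) % N] - mean) for i in range(N))
            for k in range(N)]

sky = [0.0] * N
sky[23] = 100.0                      # one point source (e.g. a gamma-ray burst)
image = reconstruct(detector_counts(sky))
print(image.index(max(image)))       # recovered source position: 23
```

The mean subtraction in the decoding array is what suppresses the flat background from the mask's ~50% open fraction, so the correlation peak stands out at the true source position; this is why a coded mask keeps a wide field of view without sacrificing location precision.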

  16. Mikhail Lermontov’s “The Demon”: Reverse Translation as a Source of Intertextuality

    Directory of Open Access Journals (Sweden)

    Ольга Станиславовна Чеснокова

    2015-12-01

    Full Text Available This article examines the English translation of “The Demon”, one of the masterpieces of the great Russian poet Mikhail Yu. Lermontov, in a version by Avril Pyman (born 1930), a renowned British scholar in the field of Slavic studies. The intertextual relationship between the original text and its translation is traced through the parameters of verse form, plot, artistic content, and emotive resonance. Within the field of translation studies, our approach dissects the changes in genre and style, and it resorts to literal reverse translation as the most efficient device for tracing the intertextuality between the original poem and its translation. Based on these findings, we then examine the aesthetics of the rendition of the naming resources of the poem: the naming of the Demon, the poetic forms of speech manners, the biblical anaphora, the alliteration, the colour epithets, and the realia of the Caucasus. The article determines that Avril Pyman's translation serves as a prime example of careful treatment of the meter, sense, and aesthetics of Lermontov's masterpiece. The unavoidable meaning displacements in the translation did not alter the artistic message of the poem. Avril Pyman's translation of “The Demon” is therefore valued as the object of a meaningful aesthetic experience for the English reader.

  17. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity, usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in a system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language at no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
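The simplified-profile idea can be sketched in a few lines: represent each program by its most frequent byte-level n-grams and attribute a disputed file to the nearest profile. A Jaccard-style overlap stands in here for the paper's exact similarity measure, and the toy corpus is invented:

```python
from collections import Counter

# Byte-level n-gram authorship sketch: profiles are the top-L most frequent
# byte trigrams, and attribution picks the candidate with the most similar
# profile. This is our illustrative reading, not a verbatim reimplementation.

def profile(source: str, n: int = 3, L: int = 500):
    """Top-L most frequent byte-level n-grams of a source file."""
    data = source.encode("utf-8")
    grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def similarity(p1: set, p2: set) -> float:
    """Fraction of shared n-grams between two simplified profiles."""
    return len(p1 & p2) / max(len(p1 | p2), 1)

def attribute(disputed: str, samples: dict) -> str:
    """Pick the candidate author whose known code is most similar."""
    dp = profile(disputed)
    return max(samples, key=lambda author: similarity(dp, profile(samples[author])))

# Toy corpus: two authors with distinct habits (spacing, naming, bracing).
samples = {
    "alice": "for (int i = 0; i < n; ++i) { total += values[i]; }",
    "bob":   "while(idx<n){sum=sum+arr[idx];idx=idx+1;}",
}
print(attribute("for (int j = 0; j < n; ++j) { total += data[j]; }", samples))
```

Because the features are raw bytes rather than parsed tokens, the same pipeline works unchanged on C++, Java, or any other language, which is the language-independence the paper emphasizes.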

  18. Free vs. Faithful – Towards Identifying the Relationship between Academic and Professional Criteria for Legal Translation

    Directory of Open Access Journals (Sweden)

    Mette Hjort-Pedersen

    2016-12-01

    Full Text Available For many years translation theorists have discussed the degree of translational freedom a legal translator has in rendering the meaning of a legal source text in a translation. Some believe that in order to achieve the communicative purpose, legal translators should focus on readability and bias their translation towards the target language community. Others insist that because of the special nature of legal texts and the sometimes binding force of legal translations, translators should stay as close to the source text as possible, i.e., bias their translation towards the source language community. But what is the relationship between these ‘academic’ observations and the way professional users and producers, i.e., lawyers and translators, think of legal translation? This article examines how actors on the Danish legal translation market view translational manoeuvres that result in a more or less close relationship between a legal source text and its translation, and also the translator’s power to decide what the nature of this relationship should be and how it should manifest itself in the translation.

  19. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry. A more effective accident prevention system will help promote a safety culture as well as gain public acceptance of the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is part of the Computerized Advisory System for a Radiological Emergency (CARE) system at KINS, is used for prevention against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of a coupled interface system between the FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors

  20. [Correlation of codon biases and potential secondary structures with mRNA translation efficiency in unicellular organisms].

    Science.gov (United States)

    Vladimirov, N V; Likhoshvaĭ, V A; Matushkin, Iu G

    2007-01-01

    Gene expression is known to correlate with the degree of codon bias in many unicellular organisms; however, such correlation is absent in some organisms. Recently we demonstrated that inverted complementary repeats within a coding DNA sequence must be considered for proper estimation of translation efficiency, since they may form secondary structures that obstruct ribosome movement. We have developed a program for estimating the potential expression of a coding DNA sequence in a given unicellular organism using its genome sequence. The program computes an elongation efficiency index. The computation is based on an estimate of coding DNA sequence elongation efficiency that takes into account three key factors: codon bias, the average number of inverted complementary repeats, and the free energy of potential stem-loop structures formed by the repeats. The influence of these factors on translation is estimated numerically, and an optimal proportion of the factors is computed for each organism individually. Quantitative translational characteristics of 384 unicellular organisms (351 bacteria, 28 archaea, 5 eukaryota) have been computed using their annotated genomes from NCBI GenBank. Five potential evolutionary strategies of translational optimization have been identified among the studied organisms, and a considerable difference in preferred translational strategies between Bacteria and Archaea has been revealed. Significant correlations between the elongation efficiency index and gene expression levels have been shown for two organisms (S. cerevisiae and H. pylori) using available microarray data. The proposed method allows numerical estimation of coding DNA sequence translation efficiency and optimization of the nucleotide composition of heterologous genes in unicellular organisms. http://www.mgs.bionet.nsc.ru/mgs/programs/eei-calculator/.
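The published index combines organism-specific weights that the abstract does not give, so the sketch below only illustrates its two main ingredients: a crude codon-bias score and a count of inverted complementary repeats (potential hairpin stems) in a coding sequence. The scoring choices and the example sequence are illustrative, not the published formula:

```python
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def codon_bias(cds: str) -> float:
    """Crude codon-bias proxy: how concentrated usage is on the single most
    frequent codon (1.0 = every codon identical, lower = more uniform)."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    counts = Counter(codons)
    return max(counts.values()) / len(codons)

def inverted_repeats(cds: str, stem: int = 6) -> int:
    """Count stem-length words whose reverse complement reappears downstream,
    i.e. potential hairpin stems that could obstruct ribosome movement."""
    revcomp = lambda s: s.translate(COMPLEMENT)[::-1]
    hits = 0
    for i in range(len(cds) - stem + 1):
        if revcomp(cds[i:i + stem]) in cds[i + stem:]:
            hits += 1
    return hits

gene = "ATGGCTGCTGCTGAATTCAAAGCGGAATTCTAA"  # invented toy CDS
print(codon_bias(gene), inverted_repeats(gene))
```

In the paper's framework these two signals pull in opposite directions: strong codon bias tends to speed elongation, while complementary repeats (stable stem-loops) slow it, and the organism-specific weighting of the two is what the elongation efficiency index optimizes.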

  1. Functional approaches in translation studies in Germany Functional approaches in translation studies in Germany

    Directory of Open Access Journals (Sweden)

    Paul Kussmaul

    2008-04-01

    Full Text Available In the early phase of translation studies in Germany, contrastive linguistics played a major role. I shall briefly describe this approach so that the functional approach will become clearer by contrast. Influenced by the representatives of stylistique comparée, Vinay/Darbelnet (1968), Wolfram Wilss, for instance, in his early work (1971, 1977) makes frequent use of the notion transposition (German “Ausdrucksverschiebung“, cf. also Catford’s (1965) term shift). As a whole, of course, Wilss’ work has a much broader scope. More recently, he has investigated the role of cognition (1988) and the various factors in translator behaviour (1996). Nevertheless, transposition is still a very important and useful notion in describing the translation process. The need for transpositions arises when there is no possibility of formal one-to-one correspondence between source and target-language structures. The basic idea is that whenever there is a need for transposition, we are faced with a translation problem.

  2. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen , Muhammad Rabee; Du Bousquet , Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses, but it is sometimes difficult to be convinced that those metrics are really related to testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  3. Translation and Manipulation in Renaissance England

    Directory of Open Access Journals (Sweden)

    John Denton

    2016-12-01

    Full Text Available This supplementary volume to JEMS is part of an ongoing research project which began with a series of articles published by the author in the 1990s on the translation of Classical historical texts in Renaissance England. The methodology followed is that of Descriptive Translation Studies as developed by scholars such as Lefevere and Hermans, with the accent on manipulation of the source text in line with the ideological stance of the translator and the need to ensure that readers of the translation received the ‘correct’ moral lessons. Particular attention is devoted to a case study of the strategies followed in Thomas North’s domesticating English translation of Jacques Amyot’s French translation of Plutarch’s Lives and the consequences for Shakespeare’s perception of Plutarch. Biography: John Denton was associate professor of English Language and Translation at the University of Florence until his retirement in 2015. He has published on contrastive analysis, the history of translation (with special reference to Early Modern England), religious discourse, and literary and audiovisual translation.

  4. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  5. Effect of Telecollaboration on Translation of Culture-Bound Texts

    Directory of Open Access Journals (Sweden)

    Vahid Rafieyan

    2016-07-01

    Full Text Available One of the most problematic aspects of the translation phenomenon is the cultural gap between the source language and the target language (Yang, 2010). This gap can be ideally filled through telecollaboration, which provides internationally dispersed language learners in parallel language classes with cost-effective access to, and engagement with, peers who are expert speakers of the language under study (Belz, 2005). To investigate the effect of telecollaboration on the quality of translation of culture-bound texts, the current study was conducted on 64 Iranian undergraduate students of English translation at a university in Iran. The instruments used in the study consisted of three texts containing news excerpts from the Voice of America (VOA). The study consisted of three phases: (1) assessing the quality of translation of culture-bound texts, (2) random assignment of participants to two groups, one merely receiving cultural instruction while the other was linked to native English speakers through LinkedIn alongside receiving cultural instruction, and (3) assessing the quality of translation of culture-bound texts immediately and two months following treatment. The results of a mixed between-within subjects analysis of variance revealed a significant positive effect of telecollaboration on developing the quality of translation of culture-bound texts and sustaining the attained knowledge. The pedagogical implications of the findings suggest incorporating cultural components of the source language society into translation courses and providing opportunities for translation students to be exposed to authentic and intensive source language culture through telecollaboration.

  6. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing enhancer is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like the one proposed here are likely to become increasingly powerful at detecting such elements.
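The underlying signal can be illustrated far more simply than the paper's mixture-of-models computation: codon columns of an alignment that are conserved even where synonymous variation would be tolerated have anomalously low entropy. The sketch below scores invented aligned orthologues by per-column Shannon entropy; it is an intuition aid, not the paper's Bayesian method:

```python
import math
from collections import Counter

# Score each codon column of an alignment by Shannon entropy. Columns that
# stay identical across species -- entropy near zero -- even at synonymous
# positions are candidates for overlapping non-coding function.

def column_entropy(codons):
    """Shannon entropy (bits) of the codon distribution at one aligned site."""
    counts = Counter(codons)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def conserved_columns(alignment, threshold=0.5):
    """Indices of codon columns whose entropy falls below the threshold."""
    n_cols = len(alignment[0]) // 3
    scores = []
    for k in range(n_cols):
        codons = [seq[3 * k:3 * k + 3] for seq in alignment]
        scores.append(column_entropy(codons))
    return [k for k, s in enumerate(scores) if s < threshold]

alignment = [  # three invented orthologues, 4 codons each
    "ATGGAATTGCGA",
    "ATGGAACTGAGA",
    "ATGGAATTACGT",
]
print(conserved_columns(alignment))  # columns 0 and 1 are perfectly conserved
```

The paper's contribution is to replace this raw entropy with its posterior distribution under a null mixture of codon substitution models, so that ordinary protein-level conservation is factored out before a column is flagged.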

  7. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
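Replicator networks are autoencoder-style perceptrons trained to reproduce their input through a narrow hidden layer. As a much-reduced sketch of that principle, the code below trains a single linear bottleneck unit by stochastic gradient descent on data lying on a one-dimensional manifold; the paper's networks add three nonlinear hidden layers, and the dimensions, data, and hyperparameters here are invented:

```python
import random

# Minimal "replicator" sketch: a linear autoencoder with a one-unit
# bottleneck. Minimizing reconstruction error forces the network to find a
# 1-D code (a crude stand-in for a natural coordinate) for data on a line.

random.seed(0)
DIM, STEPS, LR = 3, 1000, 0.01

# Source vectors lying on a line (a 1-D manifold) embedded in 3-D space.
data = [[t, 2.0 * t, -t] for t in [random.uniform(-1, 1) for _ in range(50)]]

enc = [random.uniform(-0.5, 0.5) for _ in range(DIM)]  # encoder weights
dec = [random.uniform(-0.5, 0.5) for _ in range(DIM)]  # decoder weights

def mse():
    """Mean squared reconstruction error over the data set."""
    err = 0.0
    for x in data:
        code = sum(e * xi for e, xi in zip(enc, x))   # scalar coordinate
        err += sum((d * code - xi) ** 2 for d, xi in zip(dec, x))
    return err / len(data)

initial = mse()
for _ in range(STEPS):
    for x in data:
        code = sum(e * xi for e, xi in zip(enc, x))
        recon = [d * code for d in dec]
        grad_code = sum(2.0 * (r - xi) * d for r, xi, d in zip(recon, x, dec))
        for i in range(DIM):
            dec[i] -= LR * 2.0 * (recon[i] - x[i]) * code
            enc[i] -= LR * grad_code * x[i]
final = mse()
print(initial, "->", final)  # reconstruction error shrinks toward zero
```

As training drives the reconstruction error down, the bottleneck activation becomes a usable coordinate along the data manifold; the paper's deeper, nonlinear replicator networks extend this to curved manifolds and, with noisy training examples, to removable-noise elimination.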

  8. How ‘direct’ can a direct translation be? Some perspectives from the realities of a new type of church Bible

    Directory of Open Access Journals (Sweden)

    Christo H.J. van der Merwe

    2016-07-01

    Keywords: Afrikaans Bibles; Bible translation; Biblical Hebrew; church Bible; code model; cognitive linguistics; cognitive semantics; communication model; communicative clue; direct translation; discourse marker; dynamic equivalent translation; functionalist translation

  9. Reflections on the Status of Hungarian Loanwords in Old Romanian Translations

    Directory of Open Access Journals (Sweden)

    Pál Enikő

    2015-03-01

    Full Text Available Translation has always been important for religion as a way of preaching God's word. The first Romanian translations of religious texts, including the first (although incomplete) translation of the Bible, date from the sixteenth century. In this early period of Romanian writing, Romanian translators encountered several problems in conveying the meaning of these texts of great complexity. Some of the difficulties were due to the source texts available in the epoch, others to the ideal of literal translation, to the principle of legitimacy, or to the relatively poor development of the Romanian language, which limited the translators' options. The present study focuses on the causes and purposes for which lexical items of Hungarian origin interweave old Romanian translations. In this epoch, Hungarian influence was favoured by a complex of political, legal, administrative and sociocultural factors, sometimes even forced by these circumstances. On the one hand, given the premises of vivid contacts between Romanians and Hungarians in the regions where the old Romanian translations (or their originals) can be located, a number of Hungarian loanwords of folk origin penetrated these texts. On the other hand, when using Hungarian sources, translators imported useful source-language calques and loanwords, which have enriched the Romanian language.

  10. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  11. PC-assisted translation of photogrammetric papers

    Science.gov (United States)

    Güthner, Karlheinz; Peipe, Jürgen

    A PC-based system for machine translation of photogrammetric papers from the English into the German language and vice versa is described. The computer-assisted translating process is not intended to create a perfect interpretation of a text but to produce a rough rendering of the content of a paper. Starting with the original text, a continuous data flow is effected into the translated version by means of hardware (scanner, personal computer, printer) and software (OCR, translation, word processing, DTP). An essential component of the system is a photogrammetric microdictionary which is being established at present. It is based on several sources, including e.g. the ISPRS Multilingual Dictionary.
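The described pipeline runs scanned text through OCR, a dictionary lookup, and word processing to produce a rough rendering rather than a polished translation. As a toy illustration of the microdictionary stage only, here is a word-for-word glossing step; the tiny German-English photogrammetric dictionary below is invented, not the project's actual microdictionary:

```python
# Word-for-word glossing with a domain microdictionary: known terms are
# replaced, unknown words pass through unchanged for a human post-editor.
# The entries are invented examples of photogrammetric vocabulary.

MICRODICT = {
    "Luftbild": "aerial photograph",
    "Messbild": "metric image",
    "Passpunkt": "control point",
    "Bündelblockausgleichung": "bundle block adjustment",
}

def gloss(text: str) -> str:
    """Rough rendering: substitute dictionary terms, keep the rest as-is."""
    return " ".join(MICRODICT.get(word, word) for word in text.split())

print(gloss("Jeder Passpunkt im Luftbild"))
# -> "Jeder control point im aerial photograph"
```

The output is deliberately unpolished, matching the system's stated goal of conveying the content of a paper rather than producing a perfect interpretation.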

  12. Expectancy and Professional Norms in Legal Translation

    DEFF Research Database (Denmark)

    Faber, Dorrit; Hjort-Pedersen, Mette

    2013-01-01

    . These parameters focus on the degree to which the use of explicitation and implicitation is considered to influence meaning transfer, authentic English legal language and style, and the informative function of the translation in a defined translational situation. Based on Chesterman’s categorization of norms...... perceived norms influence the use of explicitation and implicitation. The findings are based on experiments involving Danish translators and legal experts who were asked to evaluate three different translations into English of the same Danish legal source text on a set of defined parameters...

  13. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H⁻ linear accelerator Linac4. The source is planned to produce 80 mA of H⁻ with an emittance of 0.25 mm mrad (normalized RMS), which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between plasma, source walls, and beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up, in construction at CERN. This contrib...

  14. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 ¹²⁵I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published ¹²⁵I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.

  15. User's manual for BINIAC: A computer code to translate APET bins

    International Nuclear Information System (INIS)

    Gough, S.T.

    1994-03-01

    This report serves as the user's manual for the FORTRAN code BINIAC. BINIAC is a utility code designed to format the output from the Defense Waste Processing Facility (DWPF) Accident Progression Event Tree (APET) methodology. BINIAC inputs the accident progression bins from the APET methodology, converts the frequency from occurrences per hour to occurrences per year, sorts the progression bins, and converts the individual dimension character codes into facility attributes. Without BINIAC, this process would have to be done manually at great expense of time. BINIAC was written under the quality assurance control of IQ34 QAP IV-1, revision 0, section 4.1.4. Configuration control is established through the use of a proprietor and a cognizant users list.
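    The three transformations the manual describes (per-hour to per-year frequency conversion, bin sorting, and expansion of single-character dimension codes into attributes) can be sketched as follows; the bin layout and attribute names are invented for illustration, not the actual DWPF APET format:

    ```python
    # Illustrative sketch of BINIAC-style bin post-processing. The bin
    # strings and attribute meanings below are hypothetical examples.

    HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

    # Hypothetical one-character dimension codes -> facility attributes.
    ATTRIBUTES = {
        "E": "early containment failure",
        "L": "late containment failure",
        "N": "no containment failure",
    }

    def translate_bins(bins):
        """bins: list of (bin_string, frequency_per_hour) tuples."""
        translated = []
        for code, freq_per_hour in bins:
            translated.append({
                "bin": code,
                "frequency_per_year": freq_per_hour * HOURS_PER_YEAR,
                "attributes": [ATTRIBUTES.get(c, "unknown") for c in code],
            })
        # Sort by descending annual frequency, as a risk report would.
        translated.sort(key=lambda b: b["frequency_per_year"], reverse=True)
        return translated
    ```

    Each output record then carries the annualised frequency alongside the decoded attributes, so the sorting and decoding need not be done by hand.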

  16. Translate rotate scanning method for X-ray imaging

    International Nuclear Information System (INIS)

    Eberhard, J.W.; Kwok Cheong Tam

    1990-01-01

    Rapid x-ray inspection of objects larger than an x-ray detector array is based on a translate-rotate scanning motion of the object relative to the fan-beam source and detector. The scan for computerized tomography imaging is accomplished by rotating the object through 360 degrees at two or more positions relative to the source and detector array; in moving to another position, the object is rotated and the object or the source and detector are translated. A partial set of x-ray data is acquired at each position, and these partial sets are combined to obtain a full data set for complete image reconstruction. X-ray data for digital radiography imaging are acquired by scanning the object vertically at a first position at one view angle, rotating and translating the object relative to the source and detector to a second position, scanning vertically again, and so on to cover the object field of view, and combining the partial data sets. (author)

  17. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    Science.gov (United States)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation steps, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane but critical part of the application workflows. Translation steps can introduce errors and misrepresentations of data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the ParFlow integrated hydrologic model to demonstrate the impact of translation-tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community-approved collection of data transformation components. The components should be self-contained, composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement. Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of
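    The design pattern described above, self-contained composable units chained into a single streaming process, can be sketched with Python generators; the stages shown (time subsetting, unit conversion) are hypothetical stand-ins for the WRF-to-hydrologic-model translation tools:

    ```python
    # Sketch of composable data-translation components. Each stage is a
    # self-contained generator, so stages can be unit-tested in isolation
    # and composed into one streaming pipeline with no intermediate files.
    from functools import partial

    def subset_time(records, start, end):
        """Keep only (timestamp, value) records with start <= t <= end."""
        for t, value in records:
            if start <= t <= end:
                yield t, value

    def kelvin_to_celsius(records):
        """Convert the value of each record from kelvin to degrees Celsius."""
        for t, value in records:
            yield t, value - 273.15

    def compose(source, *stages):
        """Chain generator stages into a single streaming process,
        so no intermediate dataset is ever materialised."""
        stream = source
        for stage in stages:
            stream = stage(stream)
        return stream
    ```

    A pipeline is then assembled declaratively, e.g. `compose(iter(data), partial(subset_time, start=0, end=1), kelvin_to_celsius)`, and records flow through all stages one at a time, which is the data-movement-minimising behaviour the abstract argues for.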

  18. The coevolution of genes and genetic codes: Crick's frozen accident revisited.

    Science.gov (United States)

    Sella, Guy; Ardell, David H

    2006-09-01

    The standard genetic code is the nearly universal system for the translation of genes into proteins. The code exhibits two salient structural characteristics: it possesses a distinct organization that makes it extremely robust to errors in replication and translation, and it is highly redundant. The origin of these properties has intrigued researchers since the code was first discovered. One suggestion, which is the subject of this review, is that the code's organization is the outcome of the coevolution of genes and genetic codes. In 1968, Francis Crick explored the possible implications of coevolution at different stages of code evolution. Although he argued that coevolution was likely to influence the evolution of the code, he concluded that it falls short of explaining the organization of the code we see today. The recent application of mathematical modeling to study the effects of errors on the course of coevolution suggests a different conclusion. It shows that coevolution readily generates genetic codes that are highly redundant and similar in their error-correcting organization to the standard code. We review this recent work and suggest that further affirmation of the role of coevolution can be attained by investigating the extent to which the outcome of coevolution is robust to other influences that were present during the evolution of the code.

  19. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  20. Hard real-time multibody simulations using ARM-based embedded systems

    Energy Technology Data Exchange (ETDEWEB)

    Pastorino, Roland, E-mail: roland.pastorino@kuleuven.be, E-mail: rpastorino@udc.es; Cosco, Francesco, E-mail: francesco.cosco@kuleuven.be; Naets, Frank, E-mail: frank.naets@kuleuven.be; Desmet, Wim, E-mail: wim.desmet@kuleuven.be [KU Leuven, PMA division, Department of Mechanical Engineering (Belgium); Cuadrado, Javier, E-mail: javicuad@cdf.udc.es [Universidad de La Coruña, Laboratorio de Ingeniería Mecánica (Spain)

    2016-05-15

    The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques that were chosen to generate source code for multibody models in different programming languages. Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library and Source Code Translation is used to translate the intermediate representation automatically to an interpreted language or to a compiled language for efficiency purposes. An implementation of these techniques is proposed. It is based on a Python template engine and AST tree walkers for Source Code Generation and on a model-driven translator for the Source Code Translation. The code is translated from a metalanguage to any of the following four programming languages: Python-Numpy, Matlab, C++-Armadillo, C++-Eigen. Two examples of multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples has been generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real-time despite the low resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real-time on Xenomai for both embedded systems. This work shows through measurements that Automatic Programming and Source Code Translation are valuable techniques to develop real-time multibody models to be used in embedded observers and controllers.
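    The two-step pattern described here, generating a procedural intermediate representation and then translating it to several target languages, can be illustrated with a toy expression translator; this mini-IR and its emitters are an invented sketch, not the authors' actual tool:

    ```python
    # Toy illustration of the Automatic Programming / Source Code
    # Translation pattern: one intermediate representation (IR) of an
    # expression is emitted as two different target languages.

    def emit(node):
        """Recursively render an IR node as infix expression text
        (syntax that happens to be shared by Python and C++)."""
        kind = node[0]
        if kind == "var":
            return node[1]
        if kind == "const":
            return repr(node[1])
        op = {"add": "+", "mul": "*"}[kind]
        return f"({emit(node[1])} {op} {emit(node[2])})"

    def translate(name, node, lang):
        """Wrap the emitted expression in a function definition for one target."""
        body = emit(node)
        if lang == "python":
            return f"def {name}(m, a):\n    return {body}"
        if lang == "cpp":
            return f"double {name}(double m, double a) {{ return {body}; }}"
        raise ValueError(f"unsupported target language: {lang}")

    # IR as nested tuples: here the expression m*a + 9.81
    ir = ("add", ("mul", ("var", "m"), ("var", "a")), ("const", 9.81))
    ```

    The point of the pattern is that correctness lives in the IR: once the model is generated, emitting an interpreted language for prototyping or a compiled language for real-time execution is a mechanical translation step.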

  1. Hard real-time multibody simulations using ARM-based embedded systems

    International Nuclear Information System (INIS)

    Pastorino, Roland; Cosco, Francesco; Naets, Frank; Desmet, Wim; Cuadrado, Javier

    2016-01-01

    The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques that were chosen to generate source code for multibody models in different programming languages. Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library and Source Code Translation is used to translate the intermediate representation automatically to an interpreted language or to a compiled language for efficiency purposes. An implementation of these techniques is proposed. It is based on a Python template engine and AST tree walkers for Source Code Generation and on a model-driven translator for the Source Code Translation. The code is translated from a metalanguage to any of the following four programming languages: Python-Numpy, Matlab, C++-Armadillo, C++-Eigen. Two examples of multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples has been generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real-time despite the low resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real-time on Xenomai for both embedded systems. This work shows through measurements that Automatic Programming and Source Code Translation are valuable techniques to develop real-time multibody models to be used in embedded observers and controllers.

  2. How to Verify and Manage the Translational Plagiarism?

    Science.gov (United States)

    Wiwanitkit, Viroj

    2016-01-01

    The use of Google Translate as a tool for determining translational plagiarism is a big challenge. As noted, plagiarism of original papers written in Macedonian and translated into other languages can be verified after computerised translation into those languages. Attempts to screen for translational plagiarism should be supported, and the Google Translate tool might be helpful. Special focus should be placed on any non-English reference that might be the source of plagiarised material, and on any non-English article that might be translated from an original English article, which cannot be detected by a simple plagiarism screening tool. It is a hard job for any journal to detect complex translational plagiarism, but the harder job might be how to manage the case effectively. PMID:27703588

  3. VERSIONS VERSUS BODIES: TRANSLATIONS IN THE MISSIONARY ENCOUNTER IN AMAZONIA

    OpenAIRE

    Vilaça, Aparecida

    2016-01-01

    Abstract This paper analyzes the two distinct concepts of translation at work in the encounter between the Amazonian Wari' and the New Tribes Mission evangelical missionaries, and the equivocations stemming from this difference. While the missionaries conceive translation as a process of converting meanings between languages, conceived as linguistic codes that exist independently of culture, for the Wari', in consonance with their perspectivist ontology, it is not language that differentiates...

  4. The Role of Localisation in Advertising Translation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ying

    2016-01-01

    Today, a growing number of international corporations have been seeking to boost their sales on a global scale. To achieve this, advertising, one of the most common ways to stimulate consumption, is widely used, and translation is involved because of the diversity of languages. Thus, how a source advertisement is translated for a target culture has a decisive influence on a company's marketing. The aim of the target advertisement is to sell products to the locals. In this sense, localisation plays a significant role in the translation of advertising. This essay discusses the importance of localisation in the translation of advertising and analyses cases of English-Chinese advertising translation based on the purpose of advertising and advertising translation.

  5. SPEECH ACT OF ILTIFAT AND ITS INDONESIAN TRANSLATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Zaka Al Farisi

    2015-01-01

    Full Text Available Abstract: Iltifat (a shifting speech act) is a distinctive style considered unique to Arabic. It is prone to errors when translated into Indonesian; the translation of the iltifat speech act into another language is therefore an important issue. The objective of the study is to identify the translation procedures/techniques and ideology required in dealing with the iltifat speech act. This research is directed at translation as a cognitive product of a translator. The data used in the present study were a corpus of Koranic verses that contain the iltifat speech act, along with their translations. Data analysis used a descriptive-evaluative method with a content analysis model. The data source of this research consisted of the Koran and its translation. A purposive sampling technique was employed, with the sample being the iltifat speech acts contained in the Koran. The results showed that more than 60% of iltifat speech acts were translated using a literal procedure. The significant number of literal translations of the verses indicates that the Ministry of Religious Affairs tended to use a literal method of translation. In other words, the Koran translation made by the Ministry of Religious Affairs tended to be oriented to the source language in dealing with the iltifat speech act. The predominance of the literal procedure shows a tendency towards a foreignization ideology. Transitional pronouns contained in the iltifat speech act can be clearly translated when thick translations are used, in the form of descriptions in parentheses. In this case, explanation can be a choice in translating the iltifat speech act.

  6. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  7. Quantifying Translation-Invariance in Convolutional Neural Networks

    OpenAIRE

    Kauderer-Abrams, Eric

    2017-01-01

    A fundamental problem in object recognition is the development of image representations that are invariant to common transformations such as translation, rotation, and small deformations. There are multiple hypotheses regarding the source of translation invariance in CNNs. One idea is that translation invariance is due to the increasing receptive field size of neurons in successive convolution layers. Another possibility is that invariance is due to the pooling operation. We develop a simple ...

  8. Effective knowledge management in translational medicine.

    Science.gov (United States)

    Szalma, Sándor; Koka, Venkata; Khasanova, Tatiana; Perakslis, Eric D

    2010-07-19

    The growing consensus that the most valuable data source for biomedical discoveries is derived from human samples is clearly reflected in the growing number of translational medicine and translational science departments across pharma, as well as in academic and government-supported initiatives such as the Clinical and Translational Science Awards (CTSA) in the US and the Seventh Framework Programme (FP7) of the EU, with their emphasis on translating research for human health. The pharmaceutical companies of Johnson and Johnson have established translational and biomarker departments and implemented an effective knowledge management framework, including a data warehouse and the associated data mining applications. The implemented resource is built from open source systems such as i2b2 and GenePattern. The system has been deployed across multiple therapeutic areas within the pharmaceutical companies of Johnson and Johnson and is being used actively to integrate and mine internal and public data to support drug discovery and development decisions, such as indication selection and trial design, in a translational medicine setting. Our results show that the established system allows scientists to quickly re-validate hypotheses or generate new ones with the use of an intuitive graphical interface. The implemented resource can serve as the basis of precompetitive sharing and mining of studies involving samples from human subjects, thus enhancing our understanding of human biology and pathophysiology and ultimately leading to more effective treatment of diseases which represent unmet medical needs.

  9. Internal field probing of translating FRCs

    International Nuclear Information System (INIS)

    Armstrong, W.T.; Chrien, R.E.; Milroy, R.D.

    1984-11-01

    Magnetic field probes have been employed to study the internal field structure of Field-Reversed Configurations (FRCs) translating past the probes in the FRX-C/T device. Internal closed flux surfaces can be studied in this manner with minimal perturbation because of the rapid transit of the plasma (translation velocity v_z ≈ 10 cm/μs). Data have been taken using a 5-mtorr D₂ gas-puff mode of operation in the FRC source coil, which yields an initial plasma density of ≈ 1 x 10¹⁵ cm⁻³ and x_s ≈ 0.40. FRCs translate from the ≈ 25 cm radius source coil into a 20 cm radius metal translation vessel. Of the many translation conditions studied, the condition considered here is translation into a weak guide field, resulting in expansion of the FRC to a density of ≈ 3 x 10¹⁴ cm⁻³ and x_s ≈ 0.7. The expected reversed B_z structure is observed. Evidence of island structure is also observed. Fluctuating levels of B_θ are observed with amplitudes ≤ B₀/3 and values of flux ≈ 4 x the poloidal flux. Values of β on the separatrix of β_s ≈ 0.3 (indexed to the external field) are implied from the field measurements. This decrease of β_s with increased x_s is expected, and desirable for improved plasma confinement.

  10. Effective knowledge management in translational medicine

    Directory of Open Access Journals (Sweden)

    Khasanova Tatiana

    2010-07-01

    Full Text Available Abstract Background The growing consensus that the most valuable data source for biomedical discoveries is derived from human samples is clearly reflected in the growing number of translational medicine and translational science departments across pharma, as well as in academic and government-supported initiatives such as the Clinical and Translational Science Awards (CTSA) in the US and the Seventh Framework Programme (FP7) of the EU, with their emphasis on translating research for human health. Methods The pharmaceutical companies of Johnson and Johnson have established translational and biomarker departments and implemented an effective knowledge management framework, including a data warehouse and the associated data mining applications. The implemented resource is built from open source systems such as i2b2 and GenePattern. Results The system has been deployed across multiple therapeutic areas within the pharmaceutical companies of Johnson and Johnson and is being used actively to integrate and mine internal and public data to support drug discovery and development decisions, such as indication selection and trial design, in a translational medicine setting. Our results show that the established system allows scientists to quickly re-validate hypotheses or generate new ones with the use of an intuitive graphical interface. Conclusions The implemented resource can serve as the basis of precompetitive sharing and mining of studies involving samples from human subjects, thus enhancing our understanding of human biology and pathophysiology and ultimately leading to more effective treatment of diseases which represent unmet medical needs.

  11. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  12. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system (codes with appropriate definitions), arranged: (1) according to subject matter (thematically), (2) with the codes listed alphabetically, and (3) with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage requirements; and the standardisation of terminology. The nature of this thesaurus-like 'key to codes' makes it impossible to either establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)
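    The store-as-code, retrieve-as-plain-language scheme can be sketched as a pair of lookup keys; the codes and definitions below are invented examples, not actual NAGRADATA entries:

    ```python
    # Minimal sketch of a thesaurus-style code key: data is stored as
    # compact codes and automatically translated back into plain language
    # on retrieval. Codes and definitions here are invented examples.

    CODE_KEY = {
        "GRA": "granite",
        "SST": "sandstone",
        "CLY": "claystone",
    }
    # Reverse key, analogous to the alphabetically arranged definition key.
    ENCODE = {definition: code for code, definition in CODE_KEY.items()}

    def store(description):
        """Encode plain language into its code before writing to the databank."""
        return ENCODE[description]

    def retrieve(code):
        """Automatically translate a stored code back into plain language."""
        return CODE_KEY.get(code, "undefined code")
    ```

    Storing codes rather than free text is what yields the advantages listed above: fixed-width lookups are fast, codes are compact, and every record uses exactly the standardised terminology of the key.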

  13. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN......, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  14. Euphemism vs explicitness: A corpus-based analysis of translated ...

    African Journals Online (AJOL)

    This article examines the governing initial norms, namely explicitness and euphemism in English source texts and Ndebele translations, focusing on how these norms influenced the strategies chosen by the Ndebele translators in the translation of taboo terms. In the article, a corpus-based approach is used to identify head ...

  15. Pseudo-realia in the Romanian Translations of Various Hungarian Institutions and in the Hungarian Translations of Romanian Public Administration Terms

    Directory of Open Access Journals (Sweden)

    Zopus Andras

    2016-12-01

    Full Text Available My presentation addresses an issue that translators of Romanian–Hungarian legal and economic texts encounter almost daily. Each field of translation is special in its own way, but translating legal/economic texts requires an especially accurate knowledge of the acts, laws, and concepts of both the source and target languages, since this is essential for the translated text to be of professional quality and – last but not least – intelligible to the target-language audience, i.e. the customers.

  16. SETI-EC: SETI Encryption Code

    Science.gov (United States)

    Heller, René

    2018-03-01

    The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
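
    The page layout described above (757 lines of 359 bits per page) can be sketched as follows. This is an illustrative reimplementation of the pagination step only, not the published SETI-EC code, and the per-page header calculation is omitted:

    ```python
    def paginate_bits(bits, line_len=359, lines_per_page=757):
        """Split a flat bit string into fixed-width lines and group the
        lines into pages, zero-padding the final line (illustrative only;
        the real code derives the bits from PBM images and adds headers)."""
        pad = (-len(bits)) % line_len
        bits = bits + "0" * pad
        lines = [bits[i:i + line_len] for i in range(0, len(bits), line_len)]
        return [lines[i:i + lines_per_page]
                for i in range(0, len(lines), lines_per_page)]

    pages = paginate_bits("1011" * 1000)  # 4000 bits -> 12 lines of 359 bits, 1 page
    ```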

  17. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  18. Translation initiation in bacterial polysomes through ribosome loading on a standby site on a highly translated mRNA

    Science.gov (United States)

    Andreeva, Irena

    2018-01-01

    During translation, consecutive ribosomes load on an mRNA and form a polysome. The first ribosome binds to a single-stranded mRNA region and moves toward the start codon, unwinding potential mRNA structures on the way. In contrast, the following ribosomes can dock at the start codon only when the first ribosome has vacated the initiation site. Here we show that loading of the second ribosome on a natural 38-nt-long 5′ untranslated region of lpp mRNA, which codes for the outer membrane lipoprotein from Escherichia coli, takes place before the leading ribosome has moved away from the start codon. The rapid formation of this standby complex depends on the presence of ribosomal proteins S1/S2 in the leading ribosome. The early recruitment of the second ribosome to the standby site before translation by the leading ribosome and the tight coupling between translation elongation by the first ribosome and the accommodation of the second ribosome can contribute to high translational efficiency of the lpp mRNA. PMID:29632209

  19. XML Translator for Interface Descriptions

    Science.gov (United States)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
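
    As a rough sketch of this kind of schema-driven translation, the following uses a hypothetical register-description format (the element and attribute names here are invented for illustration; the actual schema is project-specific) and emits C-style header constants:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical interface description; the real XML schema differs.
    XML = """
    <interface name="FPGA0">
      <register name="CTRL" offset="0x00">
        <field name="ENABLE" bit="0"/>
        <field name="RESET" bit="1"/>
      </register>
    </interface>
    """

    def to_c_header(xml_text):
        """Emit C #define lines for each register offset and field bit,
        so software and hardware share one source of truth."""
        root = ET.fromstring(xml_text)
        lines = []
        for reg in root.iter("register"):
            lines.append(f"#define {reg.get('name')}_OFFSET {reg.get('offset')}")
            for fld in reg.iter("field"):
                lines.append(f"#define {reg.get('name')}_{fld.get('name')}_BIT {fld.get('bit')}")
        return "\n".join(lines)

    print(to_c_header(XML))
    ```

    A real translator would emit Verilog and VHDL headers from the same parsed tree, which is what keeps both sides of the interface consistent.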

  20. Transmitter and Translating Receiver Design For 64-ary Pulse Position Modulation (PPM)

    Energy Technology Data Exchange (ETDEWEB)

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2010-01-20

    This paper explores the architecture and design of an optically-implemented 64-ary PPM transmitter and direct-translating receiver that effectively translates incoming electrically-generated bit streams into optical PPM symbols (and vice-versa) at > 1 Gb/s data rates. The PPM transmitter is a cascade of optical switches operating at the frame rate. A corresponding receiver design is more difficult to architect and implement, since increasing data rates lead to correspondingly shorter decision times (slot times and frame times). We describe a solution in the form of a time-to-space mapping arrayed receiver that performs a translating algorithm represented as a code map. The technique for generating the code map is described, and the implementation of the receiver as a planar lightwave circuit is given. The techniques for implementing the transmitter and receiver can be generalized for any case of M-ary PPM.
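
    The bit-to-symbol mapping can be illustrated with a minimal sketch: each 6-bit group selects one of 64 slots in a frame, and decoding recovers the bits from the pulse position. This shows the modulation mapping only, not the optical switch cascade or the time-to-space receiver described in the paper:

    ```python
    def ppm_encode(bits):
        """Map each 6-bit group to a 64-slot frame containing a single
        pulse in the slot indexed by the group's binary value."""
        assert len(bits) % 6 == 0
        frames = []
        for i in range(0, len(bits), 6):
            frame = [0] * 64
            frame[int(bits[i:i + 6], 2)] = 1
            frames.append(frame)
        return frames

    def ppm_decode(frames):
        """Recover the bit stream from the pulse positions."""
        return "".join(format(frame.index(1), "06b") for frame in frames)

    msg = "101101000011"
    assert ppm_decode(ppm_encode(msg)) == msg  # lossless round trip
    ```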

  1. The Impact of Machine Translation and Computer-aided Translation on Translators

    Science.gov (United States)

    Peng, Hao

    2018-03-01

    Under the context of globalization, communications between countries and cultures are becoming increasingly frequent, which makes it imperative to use techniques that help translators. This paper explores the influence of computer-aided translation (CAT) and machine translation (MT) on translators. After an introduction to the development of machine and computer-aided translation, it describes the technologies available to translators, analyzes the demands placed on the design of computer-aided translation tools in translation practice, and discusses how such tools can be optimized and how operable they are in translation work. The findings underline the advantages and disadvantages of MT and CAT tools, their serviceability, and the future development of MT and CAT technologies. Finally, this paper probes into the impact of these new technologies on translators, in the hope that more translators and translation researchers can learn to use such tools to improve their productivity.

  2. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation combines cooperation gain and channel coding gain well, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
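
    The girth-4 condition cancelled by the joint design has a simple matrix characterization: a length-4 cycle exists exactly when two rows of the parity-check matrix share ones in two (or more) columns. A minimal check, independent of the paper's specific QC construction:

    ```python
    from itertools import combinations

    def has_girth4(H):
        """Return True if the binary parity-check matrix H (list of rows)
        contains a length-4 cycle, i.e. some pair of rows has ones in at
        least two common column positions."""
        for r1, r2 in combinations(H, 2):
            if sum(a & b for a, b in zip(r1, r2)) >= 2:
                return True
        return False

    assert has_girth4([[1, 1, 0], [1, 1, 0]])      # rows overlap in 2 columns
    assert not has_girth4([[1, 1, 0], [0, 1, 1]])  # rows overlap in 1 column
    ```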

  3. Translational Control of Cell Division by Elongator

    Directory of Open Access Journals (Sweden)

    Fanelie Bauer

    2012-05-01

    Full Text Available Elongator is required for the synthesis of the mcm5s2 modification found on tRNAs recognizing AA-ending codons. In order to obtain a global picture of the role of Elongator in translation, we used reverse protein arrays to screen the fission yeast proteome for translation defects. Unexpectedly, this revealed that Elongator inactivation mainly affected three specific functional groups including proteins implicated in cell division. The absence of Elongator results in a delay in mitosis onset and cytokinesis defects. We demonstrate that the kinase Cdr2, which is a central regulator of mitosis and cytokinesis, is under translational control by Elongator due to the lysine codon usage bias of the cdr2 coding sequence. These findings uncover a mechanism by which the codon usage, coupled to tRNA modifications, fundamentally contributes to gene expression and cellular functions.
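
    As a toy illustration of the codon usage bias invoked here, one can measure what fraction of the lysine codons in a coding sequence are AAA (the AA-ending codon read by the mcm5s2-modified tRNA) rather than AAG. This is a simplified measure for illustration, not the authors' analysis pipeline:

    ```python
    def lysine_codon_bias(cds):
        """Fraction of lysine codons that are AAA rather than AAG in a
        coding DNA sequence (toy measure of AAA codon usage bias)."""
        codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
        aaa = codons.count("AAA")
        aag = codons.count("AAG")
        return aaa / (aaa + aag) if (aaa + aag) else 0.0

    lysine_codon_bias("AAAAAGAAATTT")  # 2 of the 3 Lys codons are AAA
    ```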

  4. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer of Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  5. SHIFT: server for hidden stops analysis in frame-shifted translation.

    Science.gov (United States)

    Gupta, Arun; Singh, Tiratha Raj

    2013-02-23

    Frameshift is one of the three classes of recoding. Frameshifts lead to waste of energy, resources, and activity of the biosynthetic machinery. In addition, some peptides synthesized after frameshifts are probably cytotoxic, which serves as a plausible cause of innumerable diseases and disorders such as muscular dystrophies, lysosomal storage disorders, and cancer. Hidden stop codons occur naturally in coding sequences among all organisms. These codons are associated with the early termination of translation upon incorrect reading-frame selection and help to reduce the metabolic cost related to frameshift events. Researchers have identified several consequences of hidden stop codons and their association with myriad disorders. However, the wealth of information available is scattered and not easily amenable to data mining. To reduce this gap, this work describes an algorithmic web-based tool to study hidden stops in frameshifted translation for all lineages through their respective genetic code systems. This paper describes SHIFT, an algorithmic web application tool that provides a user-friendly interface for identifying and analyzing hidden stops in frameshifted translation of genomic sequences for all available genetic code systems. We have calculated the correlation between codon usage frequencies and the plausible contribution of codons towards hidden stops in an off-frame context. Markov chains of various orders have been used to model hidden stops in frameshifted peptides and their evolutionary association with naturally occurring hidden stops. In order to obtain reliable and persuasive estimates for the naturally occurring and predicted hidden stops, statistical measures have been implemented. This paper presented SHIFT, an algorithmic tool that allows user-friendly exploration, analysis, and visualization of hidden stop codons in frameshifted translations. It is expected that this web based tool would serve as a useful complement for
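
    The core computation, counting stop codons in the off-frame (+1 and +2) readings of a coding sequence, can be sketched as follows for the standard genetic code (SHIFT itself supports all genetic code systems and adds the statistical modelling described above):

    ```python
    STOPS = {"TAA", "TAG", "TGA"}  # stop codons, standard genetic code

    def hidden_stops(seq):
        """Count stop codons appearing in the +1 and +2 reading frames
        of a DNA coding sequence ('hidden' off-frame stops)."""
        counts = {}
        for shift in (1, 2):
            codons = [seq[i:i + 3] for i in range(shift, len(seq) - 2, 3)]
            counts[shift] = sum(c in STOPS for c in codons)
        return counts

    hidden_stops("ATGATAGCAT")  # two hidden stops (TGA, TAG) in the +1 frame
    ```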

  6. Frozen Accident Pushing 50: Stereochemistry, Expansion, and Chance in the Evolution of the Genetic Code.

    Science.gov (United States)

    Koonin, Eugene V

    2017-05-23

    Nearly 50 years ago, Francis Crick propounded the frozen accident scenario for the evolution of the genetic code along with the hypothesis that the early translation system consisted primarily of RNA. Under the frozen accident perspective, the code is universal among modern life forms because any change in codon assignment would be highly deleterious. The frozen accident can be considered the default theory of code evolution because it does not imply any specific interactions between amino acids and the cognate codons or anticodons, or any particular properties of the code. The subsequent 49 years of code studies have elucidated notable features of the standard code, such as high robustness to errors, but failed to develop a compelling explanation for codon assignments. In particular, stereochemical affinity between amino acids and the cognate codons or anticodons does not seem to account for the origin and evolution of the code. Here, I expand Crick's hypothesis on RNA-only translation system by presenting evidence that this early translation already attained high fidelity that allowed protein evolution. I outline an experimentally testable scenario for the evolution of the code that combines a distinct version of the stereochemical hypothesis, in which amino acids are recognized via unique sites in the tertiary structure of proto-tRNAs, rather than by anticodons, expansion of the code via proto-tRNA duplication, and the frozen accident.

  7. IRSN Code of Ethics and Professional Conduct. Annex VII [TSO Mission Statement and Code of Ethics

    International Nuclear Information System (INIS)

    2018-01-01

    IRSN has adopted, in 2013, a Code of Ethics and Professional Conduct, the contents of which are summarized. As a preamble, it is indicated that the Code, which was adopted in 2013 by the Ethics Commission of IRSN and the Board of IRSN, complies with relevant constitutional and legal requirements. The introduction to the Code presents the role and missions of IRSN in the French system, as well as the various conditions and constraints that frame its action, in particular with respect to ethical issues. It states that the Code sets principles and establishes guidance for addressing these constraints and resolving conflicts that may arise, thus constituting references for the Institute and its staff, and helping IRSN’s partners in their interaction with the Institute. The stipulations of the Code are organized in four articles, reproduced and translated.

  8. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  11. N-terminal Proteomics Assisted Profiling of the Unexplored Translation Initiation Landscape in Arabidopsis thaliana.

    Science.gov (United States)

    Willems, Patrick; Ndah, Elvis; Jonckheere, Veronique; Stael, Simon; Sticker, Adriaan; Martens, Lennart; Van Breusegem, Frank; Gevaert, Kris; Van Damme, Petra

    2017-06-01

    Proteogenomics is an emerging research field yet lacking a uniform method of analysis. Proteogenomic studies in which N-terminal proteomics and ribosome profiling are combined, suggest that a high number of protein start sites are currently missing in genome annotations. We constructed a proteogenomic pipeline specific for the analysis of N-terminal proteomics data, with the aim of discovering novel translational start sites outside annotated protein coding regions. In summary, unidentified MS/MS spectra were matched to a specific N-terminal peptide library encompassing protein N termini encoded in the Arabidopsis thaliana genome. After a stringent false discovery rate filtering, 117 protein N termini compliant with N-terminal methionine excision specificity and indicative of translation initiation were found. These include N-terminal protein extensions and translation from transposable elements and pseudogenes. Gene prediction provided supporting protein-coding models for approximately half of the protein N termini. Besides the prediction of functional domains (partially) contained within the newly predicted ORFs, further supporting evidence of translation was found in the recently released Araport11 genome re-annotation of Arabidopsis and computational translations of sequences stored in public repositories. Most interestingly, complementary evidence by ribosome profiling was found for 23 protein N termini. Finally, by analyzing protein N-terminal peptides, an in silico analysis demonstrates the applicability of our N-terminal proteogenomics strategy in revealing protein-coding potential in species with well- and poorly-annotated genomes. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
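
    The N-terminal methionine excision (NME) compliance filter mentioned above can be sketched with the common rule of thumb that the initiator Met is cleaved when the second residue has a small side chain (Ala, Cys, Gly, Pro, Ser, Thr, Val); the exact specificity criteria used in the study may differ:

    ```python
    # Residues after which the initiator Met is typically cleaved
    # (rule-of-thumb NME specificity; an assumption for illustration).
    NME_PERMISSIVE = set("ACGPSTV")

    def nme_compliant(nterm_peptide):
        """Check whether an observed protein N terminus is consistent with
        initiator-Met excision: either the Met is retained, or the first
        observed residue is one after which Met is normally removed."""
        first = nterm_peptide[0]
        return first == "M" or first in NME_PERMISSIVE

    nme_compliant("AKVL")  # Met cleaved before Ala: compliant
    ```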

  12. Interdialect Translatability of the Basic Programming Language.

    Science.gov (United States)

    Isaacs, Gerald L.

    A study was made of several dialects of the Beginner's All-purpose Symbolic Instruction Code (BASIC). The purpose was to determine if it was possible to identify a set of interactive BASIC dialects in which translatability between different members of the set would be high, if reasonable programing restrictions were imposed. It was first…

  13. Validation of the Open Source Code_Aster Software Used in the Modal Analysis of the Fluid-filled Cylindrical Shell

    Directory of Open Access Journals (Sweden)

    B D. Kashfutdinov

    2017-01-01

    Full Text Available The paper deals with a modal analysis of an elastic cylindrical shell with a clamped bottom, partially filled with fluid, performed in the open-source Code_Aster software using the finite element method. Natural frequencies and modes obtained in Code_Aster are compared to experimental and theoretical data. The aim of this paper is to show that Code_Aster has all the necessary tools for solving fluid–structure interaction problems and can be used in industrial projects as an alternative to commercial software. The freely available pre- and post-processors with a graphical user interface that are compatible with Code_Aster allow creating complex models and processing the results. The paper presents new validation results for the open-source Code_Aster software used to calculate the small natural modes of a cylindrical shell partially filled with a non-viscous compressible barotropic fluid in a gravity field. The displacement of the middle surface of the thin shell and the displacement of the fluid relative to the equilibrium position are described by a coupled hydro-elasticity problem. The fluid flow is considered to be potential. The finite element method (FEM) is used. The features of the computational model are described. The resolution equation has symmetrical block matrices. For comparison of the results, the well-known modal analysis problem of a cylindrical shell with a flat non-deformable bottom, filled with a compressible fluid, is discussed. The numerical parameters of the scheme were chosen in accordance with well-known experimental and analytical data. Three cases were taken into account: an empty, a partially filled, and a completely filled cylindrical shell. The frequencies obtained in Code_Aster are in good agreement with those obtained in experiment, in the analytical solution, and by FEM in other software; the deviations from experiment and from the analytical solution are of approximately the same size across the software packages.
The obtained results extend a set of validation tests for

  14. Evidence Translation in a Youth Mental Health Service

    Directory of Open Access Journals (Sweden)

    Alan P. Bailey

    2016-02-01

    Full Text Available An evidence–practice gap is well established in the mental health field, and knowledge translation is identified as a key strategy to bridge the gap. This study outlines a knowledge translation strategy, which aims to support clinicians in using evidence in their practice within a youth mental health service (headspace. We aim to evaluate the strategy by exploring clinicians’ experiences and preferences. The translation strategy includes the creation and dissemination of evidence translation resources that summarize the best available evidence and practice guidelines relating to the management of young people with mental disorders. Semi-structured interviews were conducted with 14 youth mental health clinicians covering three topics: experiences with evidence translation resources, preferences for evidence presentation, and suggestions regarding future translation efforts. Interviews were recorded, transcribed verbatim, coded, and analyzed using thematic analysis. Themes were both predetermined by interview topic and identified freely from the data. Clinicians described their experiences with the evidence translation resources as informing decision making, providing a knowledge base, and instilling clinical confidence. Clinicians expressed a preference for brief, plain language summaries and for involvement and consultation during the creation and dissemination of resources. Suggestions to improve the dissemination strategy and the development of new areas for evidence resources were identified. The knowledge translation efforts described support clinicians in the provision of mental health services for young people. The preferences and experiences described have valuable implications for services implementing knowledge translation strategies.

  15. Translation Theory 'Translated'

    DEFF Research Database (Denmark)

    Wæraas, Arild; Nielsen, Jeppe

    2016-01-01

    Translation theory has proved to be a versatile analytical lens used by scholars working from different traditions. On the basis of a systematic literature review, this study adds to our understanding of the ‘translations’ of translation theory by identifying the distinguishing features of the most...... common theoretical approaches to translation within the organization and management discipline: actor-network theory, knowledge-based theory, and Scandinavian institutionalism. Although each of these approaches already has borne much fruit in research, the literature is diverse and somewhat fragmented......, but also overlapping. We discuss the ways in which the three versions of translation theory may be combined and enrich each other so as to inform future research, thereby offering a more complete understanding of translation in and across organizational settings....

  16. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)

  17. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collisions of a high-energy proton beam with voluminous Pb and W targets has been estimated with the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and to the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  18. A New Source Biasing Approach in ADVANTG

    International Nuclear Information System (INIS)

    Bevill, Aaron M.; Mosher, Scott W.

    2012-01-01

    The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight-window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source-biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing.
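
    A toy Monte Carlo illustration (not the ADVANTG or MCNP implementation) of why rejection sampling makes a spatially biased source an 'unfair game': biasing assigns each history a weight equal to the unbiased-to-biased density ratio, but cell rejection then skews the mean weight of the accepted histories away from 1, which is what the w̄ correction factor must compensate for:

    ```python
    import math
    import random

    def mean_accepted_weight(n=100_000, seed=1):
        """Bias source positions toward large x with density q(x) = 2x on
        [0, 1] (history weight w = p/q = 1/(2x) for unbiased p(x) = 1),
        then apply cell rejection keeping only x >= 0.5. The mean weight
        of the *accepted* histories is (1/2)/(3/4) = 2/3, not 1."""
        rng = random.Random(seed)
        weights = []
        while len(weights) < n:
            x = math.sqrt(rng.random())   # inverse-CDF sample from q(x) = 2x
            if x >= 0.5:                  # cell rejection: keep right half only
                weights.append(1.0 / (2.0 * x))
        return sum(weights) / n

    mean_accepted_weight()  # close to 2/3 rather than 1
    ```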
A stratified-random sampling approach in ADVANTG is under

  19. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
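The straight-line Gaussian plume model underlying ANEMOS can be sketched in a few lines (a simplified illustration, not ANEMOS itself; the power-law dispersion coefficients `a` and `b` are assumed stand-ins for the tabulated stability-dependent parameters a real assessment would use):

```python
import math

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Straight-line Gaussian plume concentration (Bq/m^3) at (x, y, z)
    downwind of a continuous point release of Q (Bq/s) at effective
    height H (m) in wind speed u (m/s).  sigma_y = a*x and sigma_z = b*x
    are crude assumed fits, not ANEMOS's dispersion parameters."""
    sy, sz = a * x, b * x
    lateral = math.exp(-y * y / (2.0 * sy * sy))
    # image source below ground reflects the plume at z = 0
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sz * sz)) +
                math.exp(-(z + H) ** 2 / (2.0 * sz * sz)))
    return Q / (2.0 * math.pi * u * sy * sz) * lateral * vertical

# ground-level centerline concentration 1 km downwind of a 50 m stack
chi = gaussian_plume(Q=1.0e9, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0)
print(f"{chi:.3e} Bq/m^3")
```

Features the abstract lists — deposition, plume rise, daughter in-growth, sector averaging — would be layered on top of this kernel.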

  20. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    International Nuclear Information System (INIS)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C

  1. Machine Translation and Other Translation Technologies.

    Science.gov (United States)

    Melby, Alan

    1996-01-01

    Examines the application of linguistic theory to machine translation and translator tools, discusses the use of machine translation and translator tools in the real world of translation, and addresses the impact of translation technology on conceptions of language and other issues. Findings indicate that the human mind is flexible and linguistic…

  2. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
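The information-theoretic side of the analogy is easy to make concrete. For a memoryless Bernoulli(p) source under Hamming distortion, the rate-distortion function has the closed form R(D) = h(p) − h(D), whose derivative plays the role of the contracting force in the analogy; the sketch below is standard textbook material, not the paper's chain-of-particles model:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion_bernoulli(p, D):
    """R(D) for a Bernoulli(p) source under Hamming distortion:
    R(D) = h(p) - h(D) for 0 <= D < min(p, 1-p), and 0 beyond."""
    if D >= min(p, 1 - p):
        return 0.0
    return h2(p) - h2(D)

R = rate_distortion_bernoulli(0.5, 0.11)
print(f"R(0.11) = {R:.3f} bits/symbol")   # about 0.5 bit per symbol
```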

  3. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    Energy Technology Data Exchange (ETDEWEB)

    Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters, i.e. the dose rate constant, radial dose function, and anisotropy function, of different brachytherapy sources (Pd-103, I-125, Ir-192, and Cs-137) were calculated in a water phantom. The results obtained by three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross-section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with those of the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources unless its cross-section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross-section libraries.
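Of the TG-43 quantities compared above, the geometry function has a simple closed form. The sketch below implements the standard line-source geometry function G_L(r, θ) = β/(L·r·sinθ) used when extracting g(r) and F(r, θ) from computed dose rates; it is the generic TG-43 formula, not code from this study:

```python
import math

def G_line(r, theta, L):
    """TG-43 line-source geometry function G_L(r, theta) [cm^-2] for an
    active length L (cm) at distance r (cm) and polar angle theta, with
    the standard on-axis limit 1/(r^2 - L^2/4)."""
    if abs(math.sin(theta)) < 1e-9:
        return 1.0 / (r * r - L * L / 4.0)
    y = r * math.sin(theta)                       # distance off the axis
    z1 = r * math.cos(theta) - L / 2.0            # toward one source end
    z2 = r * math.cos(theta) + L / 2.0            # toward the other end
    beta = math.atan2(z2, y) - math.atan2(z1, y)  # angle subtended
    return beta / (L * r * math.sin(theta))

# with a vanishing source length the line formula approaches the
# point-source inverse square, 1/r^2
print(G_line(5.0, math.pi / 2.0, 1e-6))   # ~ 1/25 = 0.04
print(G_line(5.0, math.pi / 2.0, 0.35))   # typical seed length, just below 0.04
```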

  4. PATSTAGS - PATRAN-STAGSC-1 TRANSLATOR

    Science.gov (United States)

    Otte, N. E.

    1994-01-01

    PATSTAGS translates PATRAN finite element model data into STAGS (Structural Analysis of General Shells) input records to be used for engineering analysis. The program reads data from a PATRAN neutral file and writes STAGS input records into a STAGS input file and a UPRESS data file. It supports translation of nodal constraints and of nodal, element, force, and pressure data. PATSTAGS uses three files: the PATRAN neutral file to be translated, a STAGS input file, and a STAGS pressure data file. The user provides the name of the neutral file and the desired names of the STAGS files to be created. The pressure data file contains the element live pressure data used in the STAGS subroutine UPRESS. PATSTAGS is written in FORTRAN 77 for DEC VAX series computers running VMS. The main memory requirement for execution is approximately 790K of virtual memory. Output blocks can be modified to output the data in any format desired, allowing the program to be used to translate model data to analysis codes other than STAGSC-1 (HQN-10967). This program is available in DEC VAX BACKUP format on a 9-track magnetic tape or TK50 tape cartridge. Documentation is included in the price of the program. PATSTAGS was developed in 1990. DEC, VAX, TK50 and VMS are trademarks of Digital Equipment Corporation.

  5. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    Science.gov (United States)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

    Quality of Service (QoS) guarantees in real-time communication are significantly important for multimedia applications. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. But the existing frame-by-frame approach, which includes the Moving Pictures Expert Group (MPEG) standard, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams, each with its own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques by simulation and real video experiments in a TCP/IP environment.

  6. EXPLORING THEORETICAL FUNCTIONS OF CORPUS DATA IN TEACHING TRANSLATION

    OpenAIRE

    Poirier, Éric

    2016-01-01

    Abstract As language referential data banks, corpora are instrumental in the exploration of translation solutions in bilingual parallel texts or conventional usages of source or target language in monolingual general or specialized texts. These roles are firmly rooted in translation processes, from analysis and interpretation of source text to searching for an acceptable equivalent and integrating it into the production of the target text. Provided the creative and not the conservative way be...

  7. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.

  8. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  9. Performance limitations of translationally symmetric nonimaging devices

    Science.gov (United States)

    Bortz, John C.; Shatz, Narkis E.; Winston, Roland

    2001-11-01

    The component of the optical direction vector along the symmetry axis is conserved for all rays propagated through a translationally symmetric optical device. This quality, referred to herein as the translational skew invariant, is analogous to the conventional skew invariant, which is conserved in rotationally symmetric optical systems. The invariance of both of these quantities is a consequence of Noether's theorem. We show how performance limits for translationally symmetric nonimaging optical devices can be derived from the distributions of the translational skew invariant for the optical source and for the target to which flux is to be transferred. Examples of computed performance limits are provided. In addition, we show that a numerically optimized non-tracking solar concentrator utilizing symmetry-breaking surface microstructure can overcome the performance limits associated with translational symmetry. The optimized design provides a 47.4% increase in efficiency and concentration relative to an ideal translationally symmetric concentrator.
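The invariant is easy to verify numerically: for a surface whose normal has no component along the symmetry axis z, the vector form of Snell's law leaves n·d_z unchanged through refraction. A small sketch (the incident ray, surface tilt, and refractive indices are arbitrary illustrative values):

```python
import math

def refract(d, nrm, n1, n2):
    """Vector Snell's law: refract unit direction d at a surface with
    unit normal nrm (pointing back toward medium 1), going from index
    n1 into n2.  Returns None on total internal reflection."""
    cos_i = -sum(a * b for a, b in zip(d, nrm))
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    c = r * cos_i - math.sqrt(k)
    return tuple(r * di + c * ni for di, ni in zip(d, nrm))

# Surface translationally symmetric along z: its normal has no
# z-component, so n * d_z is conserved through the refraction.
d = (0.3, -0.5, math.sqrt(1.0 - 0.09 - 0.25))      # unit incident ray
nrm = (math.sin(0.2), math.cos(0.2), 0.0)          # tilted in x-y only
d_out = refract(d, nrm, 1.0, 1.5)
print(1.0 * d[2], 1.5 * d_out[2])                  # equal: invariant held
```

Because nrm has zero z-component, the refracted z-component is exactly (n1/n2)·d_z, so n·d_z matches before and after, mirroring how the conventional skew invariant behaves in rotationally symmetric systems.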

  10. Source traditions and target poetics: translation and lexical issues regarding the works of Bernat Metge and Ausiàs March

    Directory of Open Access Journals (Sweden)

    Cabré, Lluís

    2015-06-01

    This article studies two opposing tendencies in the lexical choices of translators. While at times the selection of vocabulary unfolds the literary traditions of source texts, on other occasions translators deploy target poetic registers that are absent from the source text. The authors illustrate these strategies with attention to two medieval Catalan authors: Bernat Metge (ca. 1348-1413) and Ausiàs March (1400-1459). Metge wrote his Llibre de Fortuna i Prudència (ca. 1381) in a Romance genre of considerable Occitan ascent. Latin works, however, were the actual source of inspiration for key components of his work, including vocabulary and idiomatic expressions. The translation of March's poetry during the early-modern period offers a complementary perspective. March's Renaissance translators and imitators carefully selected certain words for their renditions of March's verses in view of the significance of those terms for the poetic culture of their own time.

  11. An Introduction to the Ambiguity Tolerance: As a Source of Variation in English-Persian Translation

    Directory of Open Access Journals (Sweden)

    Hooshang Khoshsima

    2017-05-01

    Different individuals provide different translations of different qualities of the same text. This may be due to one's dominant cognitive style and individuals' particular personal characteristics (Khoshsima & Hashemi Toroujeni, 2017) in general, or ambiguity tolerance in particular. A certain degree of ambiguity tolerance (henceforth AT) has been found to facilitate language learning (Chapelle, 1983; Ehrman, 1999; Ely, 1995). However, this influential factor has been largely overlooked in translation studies. The purpose of this study was to find the relationship between AT and translation quality by identifying the expected positive correlation between the level of AT and the number of translation errors. Out of the 56 undergraduates of English-Persian Translation at Chabahar Maritime University (CMU), a sample of 34 top students was selected based on their scores on the reading comprehension, which enjoys a special focus in many contexts (Khoshsima & Rezaeian Tiyar, 2014), and structure subtests of the TOEFL. The participants responded to the SLTAS questionnaire for AT developed by Ely (1995). The questionnaire had a high alpha internal consistency reliability of .84 and standardized item alpha of .84. In the next stage of the research, the participants translated a short passage of contemporary English into Persian, which was assessed using the SICAL III scale for TQA developed and used by the Canadian Government's Translation Bureau as its official TQA model (Williams, 1989). Then, to find the relationship between the level of ambiguity tolerance in undergraduates of English-Persian translation at Chabahar Maritime University and their translation quality, analysis of the collected data revealed a significant positive correlation (r=.440, p<.05) between the participants' degree of AT and the number of errors in their translations. Controlling for SL proficiency, the correlation was still significantly positive (r=.397, p<.05). Accordingly, it

  12. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    This paper deals with a tool that enables import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell el-Retaba (Egypt). Advantages of the tool (e.g., significant optimization of surveying work) and its limits (demands on keeping conventions for the points' names coding) are discussed here as well. Possibilities of future development are suggested (e.g., generalization of points' names coding or more complex attribute table creation).
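The core of such a tool is small: group coded point names into layers and objects before drawing geometries. A minimal sketch (the LAYER.OBJECT.VERTEX naming convention below is a simplified assumption, not the add-on's exact coding rules):

```python
from collections import defaultdict

# One coded point per line: NAME X Y, where NAME follows a simplified,
# assumed LAYER.OBJECT.VERTEX convention.
raw = """\
wall.1.1 0.0 0.0
wall.1.2 4.0 0.0
wall.1.3 4.0 2.5
trench.1.1 1.0 1.0
trench.1.2 2.0 1.4
"""

# group vertices into per-layer, per-object geometries
layers = defaultdict(lambda: defaultdict(list))
for line in raw.splitlines():
    name, x, y = line.split()
    layer, obj, _vertex = name.split(".")
    layers[layer][obj].append((float(x), float(y)))

for layer in sorted(layers):
    for obj, pts in layers[layer].items():
        print(layer, obj, pts)
```

Each per-layer group would then become one GRASS vector layer, which is also why the paper stresses keeping the naming convention consistent.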

  13. Signals Involved in Regulation of Hepatitis C Virus RNA Genome Translation and Replication

    Directory of Open Access Journals (Sweden)

    Michael Niepmann

    2018-03-01

    Hepatitis C virus (HCV) preferentially replicates in the human liver and frequently causes chronic infection, often leading to cirrhosis and liver cancer. HCV is an enveloped virus classified in the genus Hepacivirus in the family Flaviviridae and has a single-stranded RNA genome of positive orientation. The HCV RNA genome is translated and replicated in the cytoplasm. Translation is controlled by the Internal Ribosome Entry Site (IRES) in the 5′ untranslated region (5′ UTR), while also downstream elements like the cis-replication element (CRE) in the coding region and the 3′ UTR are involved in translation regulation. The cis-elements controlling replication of the viral RNA genome are located mainly in the 5′- and 3′-UTRs at the genome ends but also in the protein coding region, and in part these signals overlap with the signals controlling RNA translation. Many long-range RNA–RNA interactions (LRIs) are predicted between different regions of the HCV RNA genome, and several such LRIs are actually involved in HCV translation and replication regulation. A number of RNA cis-elements recruit cellular RNA-binding proteins that are involved in the regulation of HCV translation and replication. In addition, the liver-specific microRNA-122 (miR-122) binds to two target sites at the 5′ end of the viral RNA genome as well as to at least three additional target sites in the coding region and the 3′ UTR. It is involved in the regulation of HCV RNA stability, translation and replication, thereby largely contributing to the hepatotropism of HCV. However, we are still far from completely understanding all interactions that regulate HCV RNA genome translation, stability, replication and encapsidation. In particular, many conclusions on the function of cis-elements in HCV replication have been obtained using full-length HCV genomes or near-full-length replicon systems. These include both genome ends, making it difficult to decide if a cis-element in

  14. Signals Involved in Regulation of Hepatitis C Virus RNA Genome Translation and Replication.

    Science.gov (United States)

    Niepmann, Michael; Shalamova, Lyudmila A; Gerresheim, Gesche K; Rossbach, Oliver

    2018-01-01

    Hepatitis C virus (HCV) preferentially replicates in the human liver and frequently causes chronic infection, often leading to cirrhosis and liver cancer. HCV is an enveloped virus classified in the genus Hepacivirus in the family Flaviviridae and has a single-stranded RNA genome of positive orientation. The HCV RNA genome is translated and replicated in the cytoplasm. Translation is controlled by the Internal Ribosome Entry Site (IRES) in the 5' untranslated region (5' UTR), while also downstream elements like the cis-replication element (CRE) in the coding region and the 3' UTR are involved in translation regulation. The cis-elements controlling replication of the viral RNA genome are located mainly in the 5'- and 3'-UTRs at the genome ends but also in the protein coding region, and in part these signals overlap with the signals controlling RNA translation. Many long-range RNA-RNA interactions (LRIs) are predicted between different regions of the HCV RNA genome, and several such LRIs are actually involved in HCV translation and replication regulation. A number of RNA cis-elements recruit cellular RNA-binding proteins that are involved in the regulation of HCV translation and replication. In addition, the liver-specific microRNA-122 (miR-122) binds to two target sites at the 5' end of the viral RNA genome as well as to at least three additional target sites in the coding region and the 3' UTR. It is involved in the regulation of HCV RNA stability, translation and replication, thereby largely contributing to the hepatotropism of HCV. However, we are still far from completely understanding all interactions that regulate HCV RNA genome translation, stability, replication and encapsidation. In particular, many conclusions on the function of cis-elements in HCV replication have been obtained using full-length HCV genomes or near-full-length replicon systems. These include both genome ends, making it difficult to decide if a cis-element in question acts on HCV

  15. A Spanish version for the new ERA-EDTA coding system for primary renal disease

    Directory of Open Access Journals (Sweden)

    Óscar Zurriaga

    2015-07-01

    Conclusions: Translation and adaptation into Spanish represent an improvement that will help to introduce and use the new coding system for PRD, as it can help reduce the time devoted to coding and also the period of adaptation of health workers to the new codes.

  16. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

    During 1990 and 1991, within the frame of the Shared Cost Action Reactor Safety, the French software house CISI and the IKE of the University of Stuttgart developed the informatic structure of the European Source TERm Evaluation System (ESTER). This work made tools available that allow code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal-hydraulic conditions. Therefore, for the development of ESTER it was important to investigate how to integrate thermal-hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal-hydraulic code system ATHLET and ESTER. Owing to the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.) [de

  17. Nuclear safety code study

    Energy Technology Data Exchange (ETDEWEB)

    Hu, H.H.; Ford, D.; Le, H.; Park, S.; Cooke, K.L.; Bleakney, T.; Spanier, J.; Wilburn, N.P.; O' Reilly, B.; Carmichael, B.

    1981-01-01

    The objective is to analyze an overpower accident in an LMFBR. A simplified model of the primary coolant loop was developed in order to understand the instabilities encountered with the MELT III and SAS codes. The computer programs were translated for the switch to the IBM 4331. Numerical methods were investigated for solving the neutron kinetics equations; the Adams and Gear methods were compared. (DLC)
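The stiffness that motivates comparing Adams and Gear methods can be reproduced on a scalar test equation y' = −λy (a toy stand-in for the kinetics equations, not the MELT III/SAS model):

```python
import math

# Toy stiff problem y' = -lam*y.  With h*lam = 10, far beyond the
# explicit stability limit, the two-step Adams-Bashforth method blows
# up while BDF1 (backward Euler, the simplest Gear method) stays stable.
lam, h, steps = 1000.0, 0.01, 50

def adams_bashforth2(y0):
    f = lambda y: -lam * y
    y_prev, y = y0, y0 * math.exp(-lam * h)        # exact value to start
    for _ in range(steps - 1):
        y_prev, y = y, y + h * (1.5 * f(y) - 0.5 * f(y_prev))
    return y

def bdf1(y0):
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * h)                    # implicit step (linear)
    return y

print(f"AB2:  |y| = {abs(adams_bashforth2(1.0)):.2e}")   # diverges
print(f"BDF1: |y| = {abs(bdf1(1.0)):.2e}")               # decays
```

The same trade-off, explicit multistep efficiency versus implicit stability on stiff systems, is what a comparison of Adams and Gear methods on neutron kinetics equations probes.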

  18. Optimization of translation profiles enhances protein expression and solubility.

    Directory of Open Access Journals (Sweden)

    Anne-Katrin Hess

    mRNA is translated at a non-uniform speed that actively coordinates co-translational folding of protein domains. Using structure-based homology we identified the structural domains in epoxide hydrolases (EHs) and introduced slow-translating codons to delineate the translation of single domains. These changes in translation speed dramatically improved the solubility of two EHs of metagenomic origin in Escherichia coli. Conversely, the importance of transient attenuation for the folding, and consequently solubility, of EHs was evidenced with a member of the EH family from Agrobacterium radiobacter, which partitions in the soluble fraction when expressed in E. coli. Synonymous substitutions of codons shaping the slow-translating regions to fast-translating codons render this protein insoluble. Furthermore, we show that low protein yield can be enhanced by decreasing the free folding energy of the initial 5'-coding region, which can disrupt mRNA secondary structure and enhance ribosomal loading. This study provides direct experimental evidence that mRNA is not a mere messenger for translation of codons into amino acids but bears an additional layer of information for folding, solubility and expression level of the encoded protein. Furthermore, it provides a general frame on how to modulate and fine-tune gene expression of a target protein.
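The idea of a codon-resolved translation-speed profile can be sketched with a toy proxy in which relative codon usage stands in for speed (the frequencies below are illustrative assumptions, not the paper's data):

```python
# Toy codon-resolved "speed profile": relative codon usage serves as a
# proxy for translation speed (illustrative numbers, not measured data).
usage = {
    "CTG": 0.50,   # common leucine codon -> translated fast
    "CTA": 0.04,   # rare synonymous codon -> translated slowly
}

def speed_profile(codons):
    # unknown codons get a neutral default speed
    return [usage.get(c, 0.25) for c in codons]

fast = ["CTG"] * 6                                    # uniform, fast region
paused = ["CTG", "CTG", "CTA", "CTA", "CTG", "CTG"]   # synonymous swap
print(speed_profile(fast))
print(speed_profile(paused))   # the dip marks a translational pause
```

Both sequences encode the same six leucines, which is the point of the paper's synonymous substitutions: the pause is inserted at a domain boundary without changing the protein.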

  19. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the reactor core's expected life) of reactor operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for the 10, 20, and 30 kW operating power levels. The amounts of uranium burned up and plutonium produced in the reactor core, the concentrations of the most important fission-product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for the fuel burnup calculation in the MNSR reactor since it is newer, has a larger isotope library, and is more accurate. (author)
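A back-of-envelope cross-check on the burnup scale is straightforward (a sketch, not GETERA; the duty cycle is a hypothetical assumption, and the fission-energy yield and capture-to-fission ratio are round textbook values):

```python
# Rough U-235 consumption: fissioning ~1.05 g of U-235 yields about
# 1 MWd of thermal energy, and roughly 17% more U-235 is lost to
# (n,gamma) capture (capture-to-fission ratio alpha ~ 0.169, thermal).
def u235_consumed_grams(power_kw, hours_at_power):
    mwd = power_kw / 1000.0 * hours_at_power / 24.0   # energy in MW-days
    return 1.05 * (1.0 + 0.169) * mwd                 # fission + capture

# hypothetical MNSR duty cycle: 2.5 h/day, 250 days/yr, for 10 years
hours = 2.5 * 250 * 10
for p_kw in (10, 20, 30):
    print(p_kw, "kW ->", round(u235_consumed_grams(p_kw, hours), 1), "g U-235")
```

The estimate scales linearly with power, consistent with the abstract's comparison of the 10, 20, and 30 kW operating levels; a code like GETERA is needed for the isotope-by-isotope inventory that this energy balance cannot give.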

  20. Shuttle-Data-Tape XML Translator

    Science.gov (United States)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
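The configuration-driven parsing JSDTImport performs can be sketched as follows (the XML schema, field names, and sample record here are invented for illustration and are not JSDTImport's actual format):

```python
import xml.etree.ElementTree as ET

# An XML file declares each field's name, offset and width; the parser
# applies that layout to fixed-width records, so changing the layout
# means editing the XML, not the code.
config = """
<record name="measurement">
  <field name="msid"  start="0"  width="8"/>
  <field name="units" start="8"  width="4"/>
  <field name="rate"  start="12" width="6"/>
</record>
"""

layout = [(f.get("name"), int(f.get("start")), int(f.get("width")))
          for f in ET.fromstring(config).findall("field")]

def parse_record(line):
    return {name: line[start:start + width].strip()
            for name, start, width in layout}

row = parse_record("V75X1234PSI 10    ")
print(row)   # {'msid': 'V75X1234', 'units': 'PSI', 'rate': '10'}
```

The parsed dictionaries could then be written to an XML or SQL database, mirroring the two output paths the abstract describes.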

  1. Machine translation with minimal reliance on parallel resources

    CERN Document Server

    Tambouratzis, George; Sofianopoulos, Sokratis

    2017-01-01

    This book provides a unified view of a new methodology for Machine Translation (MT). This methodology extracts information from widely available resources (extensive monolingual corpora) while only assuming the existence of a very limited parallel corpus, and thus has a starting point distinct from that of Statistical Machine Translation (SMT). In this book, a detailed presentation of the methodology principles and system architecture is followed by a series of experiments, where the proposed system is compared to other MT systems using a set of established metrics including BLEU, NIST, Meteor and TER. Additionally, free-to-use code is available that allows the creation of new MT systems. The volume is addressed to both language professionals and researchers. Prerequisites for the readers are very limited and include only a basic understanding of machine translation as well as of the basic tools of natural language processing.

  2. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2 s, 6 s and 18 s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by the multiple-coding hypothesis. Finally, in generalization tests, when the sample duration equaled 3.5 s, the geometric mean of 2 s and 6 s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  3. Exploring theoretical functions of corpus data in teaching translation

    OpenAIRE

    Éric Poirier

    2016-01-01

    http://dx.doi.org/10.5007/2175-7968.2016v36nesp1p177 As language referential data banks, corpora are instrumental in the exploration of translation solutions in bilingual parallel texts or conventional usages of source or target language in monolingual general or specialized texts. These roles are firmly rooted in translation processes, from analysis and interpretation of source text to searching for an acceptable equivalent and integrating it into the production of the target text. Provi...

  4. Addressing the Problem of Translatability when Translating the Russian Fiction Text into English (“Vanka” by A. Chekhov)

    Directory of Open Access Journals (Sweden)

    Надежда Алексеевна Дудик

    2014-12-01

    Full Text Available The most fundamental problem involved in the theory and practice of translation from one language into another consists in achieving adequacy between the source language (SL and the target language (TL. Adequacy can be reached by means of employing the communication strategy, i.e. by discussing the dialogue nature of the text in particular; by analyzing realia in translation in terms of the text as a whole, rather than as isolated units in the system of language; and by looking at how the semantic category of intensity influences the translatability of the Russian fiction text into English. The research has shown that the aspects examined are typical of Russian-English translation in general rather than of a single text.

  5. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  6. TRANSLATING ECONOMICS TEXTBOOKS: A CASE STUDY OF EPISTEMICIDE

    Directory of Open Access Journals (Sweden)

    KARNEDI

    2015-01-01

    Full Text Available As part of discourse in the social sciences, economics textbooks written in English, whose knowledge has been transferred to other languages through translation, have had a certain impact on both the target language and the target culture. In terms of ideology, this article argues about the hegemonic status of the dominant language or culture that creates so-called epistemicide, or the erosion of knowledge, partly due to translation strategies adopted by the translator. Investigation is done using the corpus-based approach, theories of translation strategies and the comparative model. The study reveals that the translator at the macro-level of the text adopts the ideology of a foreignising rather than a domesticating strategy when translating an economics textbook from English into Indonesian. This is supported by the use of a number of source language-oriented translation techniques leading to two translation methods (i.e. literal translation and faithful translation) adopted at the micro-level of the text. This research strongly supports another relevant study pertaining to the globalisation of knowledge through translation and also the translation theories of equivalence (i.e. overt and covert translation). The research findings also have some pedagogical implications for teaching English for Specific Purposes in higher education.

  7. Developing Evaluation Skills with Legal Translation Trainees

    Directory of Open Access Journals (Sweden)

    Vîlceanu Titela

    2015-12-01

    Full Text Available Axiomatically, translation is twofold: an activity/process (more accurately designated by the term translating) and a product (the term translation can be restricted to the product). It seems that the product dimension has gained increased importance, being the most visible part of translation as market-driven, design-oriented, precise and measurable, i.e. complying with specifications. Translation engenders a sequence: identification of the text type and of end users’ needs (experts or non-experts in the field), evaluation of the complexity of the source text via global reading followed by a close reading of its parts, the translating of the document, the translator’s checking of the final version, and editing and proofreading. The translator’s choices are accountable in terms of cost-effectiveness (efficiency and effectiveness). Therefore, the legal translator should master the methodological toolkit, conceptual frame and related terminology, and adopt an inward-looking perspective (intuition, subjectivity, ingrained habits, insights deriving from his/her expertise and experience) alongside an outward-looking one (working against objective criteria, standards of quality, benchmarks, etc.).

  8. Machine Translation Using Constraint-Based Synchronous Grammar

    Institute of Scientific and Technical Information of China (English)

    WONG Fai; DONG Mingchui; HU Dongcheng

    2006-01-01

    A synchronous grammar based on the formalism of context-free grammar was developed by generalizing the first component of a production, which models the source text. Unlike other synchronous grammars, this grammar allows multiple target productions to be associated with a single production rule, which can be used to guide a parser in inferring different possible translational equivalences for a recognized input string according to the feature constraints of symbols in the pattern. An extended generalized LR algorithm was adapted to the parsing of the proposed formalism to analyze the syntactic structure of a language. The grammar was used as the basis for building a machine translation system for Portuguese-to-Chinese translation. The empirical results show that the grammar is more expressive when modeling the translational equivalences of parallel texts for machine translation and grammar rewriting applications.
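The rule formalism described above can be sketched roughly as follows. This is not the authors' actual notation: the symbols, feature names ("voice"), and data layout are invented to illustrate one source pattern carrying several constraint-guarded target productions.

```python
# One source pattern, several candidate target productions, each guarded
# by feature constraints on the recognized symbols (illustrative only).
RULES = [
    {
        "source": ("NP", "VP"),
        "targets": [
            {"order": (0, 1), "when": {"voice": "active"}},
            {"order": (1, 0), "when": {"voice": "passive"}},
        ],
    },
]

def pick_target(source_pattern, features):
    """Return the constituent reordering whose feature constraints match
    the recognized input, mimicking how the parser selects among the
    multiple target productions attached to one rule."""
    for rule in RULES:
        if rule["source"] != source_pattern:
            continue
        for target in rule["targets"]:
            if all(features.get(k) == v for k, v in target["when"].items()):
                return target["order"]
    return None
```

The point of the sketch is the one-to-many shape of the rule: a single source production can yield different translational equivalences depending on the features of the parsed symbols.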

  9. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  10. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, the readability of source code is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of...

  11. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
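One ingredient of the matrix source coding idea above can be sketched in miniature: lossy-coding the operator by discarding small entries, so that both storage and the matrix-vector product shrink. This is only the thresholding step; the published method additionally applies transforms (e.g. wavelets) and quantization before discarding coefficients, which is what makes the factored operator sparse in practice. The matrix below is a toy example, not camera data.

```python
def sparsify(matrix, threshold):
    """Lossy-code a dense matrix (list of rows) by keeping only entries
    whose magnitude exceeds the threshold; far less storage and work."""
    return {(i, j): v
            for i, row in enumerate(matrix)
            for j, v in enumerate(row)
            if abs(v) > threshold}

def sparse_matvec(sparse, x, nrows):
    """Matrix-vector product that touches only the retained entries."""
    y = [0.0] * nrows
    for (i, j), v in sparse.items():
        y[i] += v * x[j]
    return y

dense = [[1.0, 0.01, 0.0],
         [0.02, 1.0, 0.01],
         [0.0, 0.02, 1.0]]
coded = sparsify(dense, 0.05)   # keeps only the 3 diagonal entries
approx = sparse_matvec(coded, [1.0, 2.0, 3.0], 3)
```

The accuracy/computation trade-off is controlled by the threshold, in the same spirit as the rate-distortion trade-off of the full matrix source coding scheme.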

  12. New applications of Equinox code for real-time plasma equilibrium and profile reconstruction for tokamaks

    International Nuclear Information System (INIS)

    Bosak, K.; Blum, J.; Joffrin, E.

    2004-01-01

    Recent development of the real-time equilibrium code Equinox, using a fixed-point algorithm, allows major plasma magnetic parameters to be identified in real time using a rigorous analytical method. The code relies on the boundary flux code providing magnetic flux values on the first wall of the vacuum vessel. By means of least-squares minimization of the differences between the magnetic field obtained from the previous solution and the next measurements, the code identifies the source term of the non-linear Grad-Shafranov equation. The strict use of analytical equations together with a flexible algorithm offers an opportunity to include new measurements in a stable magnetic equilibrium code and to compare the results directly between several tokamaks while maintaining the same physical model (i.e. no iron model is necessary inside the equilibrium code). The successful implementation of this equilibrium code for JET and Tore Supra has already been published. In this paper, we show the preliminary results of predictive runs of the Equinox code using the ITER geometry. Because real-time control experiments on the plasma profile at JET using the code have been shown to be unstable when using magnetic and polarimetric measurements (which could be indirectly translated into an accuracy vs robustness tradeoff), we outline an algorithm that will allow us to further constrain the plasma current profile using the central value of the plasma pressure in real time, in order to better define the poloidal beta (this constraint is not necessary with a purely magnetic equilibrium). (authors)
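As a toy analogue of the least-squares identification step (deliberately not the actual Equinox algorithm, which fits the full nonlinear Grad-Shafranov source term against many magnetic measurements), one can fit a single source amplitude so that a predicted field best matches the measurements:

```python
def fit_source_amplitude(measured, predicted_unit):
    """One-parameter least squares: scale the field predicted for a
    unit-amplitude source so it best matches the measurements, i.e.
    minimize sum((m - a*p)**2) over the amplitude a.  The minimizer is
    a = <m, p> / <p, p>."""
    num = sum(m * p for m, p in zip(measured, predicted_unit))
    den = sum(p * p for p in predicted_unit)
    return num / den
```

The real code solves a much larger version of the same shape of problem at every time step: adjust the source-term parameters so the computed flux map reproduces the boundary measurements.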

  13. New applications of Equinox code for real-time plasma equilibrium and profile reconstruction for tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Bosak, K.; Blum, J. [Universite de Nice-Sophia-Antipolis, Lab. J. A. Dieudonne, 06 - Nice (France); Joffrin, E. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2004-07-01

    Recent development of the real-time equilibrium code Equinox, using a fixed-point algorithm, allows major plasma magnetic parameters to be identified in real time using a rigorous analytical method. The code relies on the boundary flux code providing magnetic flux values on the first wall of the vacuum vessel. By means of least-squares minimization of the differences between the magnetic field obtained from the previous solution and the next measurements, the code identifies the source term of the non-linear Grad-Shafranov equation. The strict use of analytical equations together with a flexible algorithm offers an opportunity to include new measurements in a stable magnetic equilibrium code and to compare the results directly between several tokamaks while maintaining the same physical model (i.e. no iron model is necessary inside the equilibrium code). The successful implementation of this equilibrium code for JET and Tore Supra has already been published. In this paper, we show the preliminary results of predictive runs of the Equinox code using the ITER geometry. Because real-time control experiments on the plasma profile at JET using the code have been shown to be unstable when using magnetic and polarimetric measurements (which could be indirectly translated into an accuracy vs robustness tradeoff), we outline an algorithm that will allow us to further constrain the plasma current profile using the central value of the plasma pressure in real time, in order to better define the poloidal beta (this constraint is not necessary with a purely magnetic equilibrium). (authors)

  14. Linguistic Levels of Translation: A Generic Exploration of Translation Difficulties in Literary Textual Corpus

    Directory of Open Access Journals (Sweden)

    Magda Madkour

    2016-11-01

    Full Text Available This case study research was based on a generic exploration of the translation problems that graduate students face in literary translation. Literary translation is fundamental to translation programs in higher education due to the upsurge that has occurred in publishing classical and modern literary works from various cultures. However, literary texts have special characteristics that make the process of transferring them from one language into another a daunting task. Translating literary texts is difficult even for professional translators because misinterpreting the messages of the source texts can lead to distorting the aesthetic aspects of the literary work. Students need to learn various linguistic levels of literary translation as well as strategies and methods of translation. Learning the linguistic levels of translation necessitates providing adequate training that is based on enhancing students’ cognitive abilities. Cognitive-based translation training helps students learn the procedures of solving the problems of translating sound and literary devices. Cognitive approaches are relevant to the translation process since cognition implies mental activities that students can use to understand and synthesize the literary text, and reconstruct it creatively. Therefore, the current study aimed at examining the relationship between cognitive teaching methodologies and students’ performance in literary translation. To examine this relationship, qualitative and quantitative data was collected from graduate students at the College of Languages and Translation at Imam Mohammed bin Saud Islamic University (IMAMU University), Riyadh, Saudi Arabia. In addition, corpus data was gathered from authentic literary texts, including novels, short stories, and poetry, to investigate the effect of linguistic analysis and cognitive strategies on the quality of literary translation. Quantitative data was analyzed using the Statistical Package for the

  15. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code

    Directory of Open Access Journals (Sweden)

    GHULAM MUSTAFA

    2017-01-01

    Full Text Available Compute-intensive programs generally spend a significant fraction of their execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system.
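The DOALL case the methodology targets can be sketched as follows (a Python stand-in for illustration; the actual work concerns Java hotspots in a JIT front-end, and the loop body here is an invented placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def hotspot(x):
    # Stand-in for a compute-intensive loop body with no loop-carried
    # dependences: each iteration reads and writes only its own element,
    # which is what makes the loop DOALL-parallelizable.
    return x * x

def run_doall(data, workers=4):
    """Execute the independent iterations of a DOALL loop in parallel,
    preserving the original iteration order in the result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(hotspot, data))
```

A loop with a dependence between iterations (e.g. an accumulator updated every pass) would not qualify, which is why the methodology selects only DOALL loops in or around hotspot methods.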

  16. Just-in-time compilation-inspired methodology for parallelization of compute intensive java code

    International Nuclear Information System (INIS)

    Mustafa, G.; Ghani, M.U.

    2017-01-01

    Compute-intensive programs generally spend a significant fraction of their execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system. (author)

  17. Translational informatics: an industry perspective.

    Science.gov (United States)

    Cantor, Michael N

    2012-01-01

    Translational informatics (TI) is extremely important for the pharmaceutical industry, especially as the bar for regulatory approval of new medications is set higher and higher. This paper will explore three specific areas in the drug development lifecycle, from tools developed by precompetitive consortia to standardized clinical data collection to the effective delivery of medications using clinical decision support, in which TI has a major role to play. Advancing TI will require investment in new tools and algorithms, as well as ensuring that translational issues are addressed early in the design process of informatics projects, and also given higher weight in funding or publication decisions. Ultimately, the source of translational tools and differences between academia and industry are secondary, as long as they move towards the shared goal of improving health.

  18. Reconciliation of international administrative coding systems for comparison of colorectal surgery outcome.

    Science.gov (United States)

    Munasinghe, A; Chang, D; Mamidanna, R; Middleton, S; Joy, M; Penninckx, F; Darzi, A; Livingston, E; Faiz, O

    2014-07-01

    Significant variation in colorectal surgery outcomes exists between different countries. Better understanding of the sources of variable outcomes using administrative data requires alignment of differing clinical coding systems. We aimed to map similar diagnoses and procedures across administrative coding systems used in different countries. Administrative data were collected in a central database as part of the Global Comparators (GC) Project. In order to unify these data, a systematic translation of diagnostic and procedural codes was undertaken. Codes for colorectal diagnoses, resections, operative complications and reoperative interventions were mapped across the respective national healthcare administrative coding systems. Discharge data from January 2006 to June 2011 for patients who had undergone colorectal surgical resections were analysed to generate risk-adjusted models for mortality, length of stay, readmissions and reoperations. In all, 52 544 case records were collated from 31 institutions in five countries. Mapping of all the coding systems was achieved so that diagnosis and procedures from the participant countries could be compared. Using the aligned coding systems to develop risk-adjusted models, the 30-day mortality rate for colorectal surgery was 3.95% (95% CI 0.86-7.54), the 30-day readmission rate was 11.05% (5.67-17.61), the 28-day reoperation rate was 6.13% (3.68-9.66) and the mean length of stay was 14 (7.65-46.76) days. The linkage of international hospital administrative data that we developed enabled comparison of documented surgical outcomes between countries. This methodology may facilitate international benchmarking. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.
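The alignment step can be pictured as a lookup table keyed by (coding system, local code). The system names and codes below are invented placeholders, not the GC project's real mapping tables, which cover the full diagnostic and procedural vocabularies of each country:

```python
# Each national administrative system's local code maps to one shared
# category, so outcomes can be pooled and compared across countries.
CODE_MAP = {
    ("system_A", "K123"): "colorectal_resection",
    ("system_B", "47.1"): "colorectal_resection",
    ("system_A", "K999"): "reoperative_intervention",
}

def to_common(system, local_code):
    """Translate a local administrative code to the shared GC-style
    category; None signals an unmapped code needing manual review."""
    return CODE_MAP.get((system, local_code))
```

Once every discharge record is expressed in the shared categories, the risk-adjusted models for mortality, length of stay, readmission and reoperation can be fitted on the pooled data.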

  19. Where do borders lie in translated literature? The case of the changing English-language market

    Directory of Open Access Journals (Sweden)

    Richard Michael Mansell

    2017-09-01

    Full Text Available Anecdotal accounts suggest that one reason for the perceived resistance to translated literature in English-language markets is that commissioning editors are averse to considering texts that they cannot read. In an attempt to overcome this barrier, English translations are increasingly commissioned by publishers of source texts and agents of source authors and used to stimulate interest in a book (not just in English-language markets, a phenomenon this article terms ‘source-commissioned translations’. This article considers how this phenomenon indicates a shift in the borders between literatures, how it disrupts accepted commercial practices, and the consequences of this for the industry and the role of English in the global book trade. In particular, it considers consequences for the quality of translations, questions regarding copyright, and the uncertain position for the translator when, at the time of translating, a contract is not in place between the translator and the publisher of the translation.

  20. Translation experiment of a plasma with field reversed configuration

    International Nuclear Information System (INIS)

    Tanjyo, Masayasu; Okada, Shigefumi; Ito, Yoshifumi; Kako, Masashi; Ohi, Shoichi

    1984-01-01

    Experiments to translate the FRC plasma from its formation area (pinch coil) into two kinds of metal vessels (magnetic flux conservers), with larger and smaller bores than that of the pinch coil, have been carried out in OCT with the aim of improving the particle confinement time tau_N by increasing x_s (the ratio of the plasma radius to that of the conducting wall). Successful translations of the plasma into both vessels were demonstrated. The x_s of the translated plasma increased from 0.4 for the source plasma in the pinch coil to 0.6 in the larger-bore vessel and to 0.7 in the smaller one. With the increase in x_s, tau_N and also the decay time of the trapped magnetic flux are extended from 15 - 20 μs for the source plasma to 50 - 80 μs. The tau_N is found to have a stronger dependence on x_s than on r_s. During the translation phase, almost half of the total particles and of the plasma energy are lost. The plasma volume is, therefore, about half of that expected from the analysis of the ideal translation process. It is also found that the translation process is nearly isothermal, as is expected from the analysis. (author)

  1. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal-to-noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims
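The decode-and-separate idea can be sketched with ±1 modulation codes. This is a simplification under stated assumptions: the claim text does not specify the code family, real aperture transmissions are between 0 and 1 rather than ±1, and the two "views" below are scalars standing in for images.

```python
# Two apertures, each modulated over four time steps by mutually
# orthogonal codes; the detector records only the sum of both views.
CODES = {
    "ap1": [1, 1, 1, 1],
    "ap2": [1, -1, 1, -1],
}

def detector_signal(views):
    """Summed detector reading at each time step, each aperture's view
    weighted by its code value at that step."""
    steps = len(CODES["ap1"])
    return [sum(views[a] * CODES[a][t] for a in views) for t in range(steps)]

def decode(signal, aperture):
    """Correlate the summed signal with one aperture's code to recover
    that aperture's individual view; orthogonality cancels the others."""
    code = CODES[aperture]
    return sum(s * c for s, c in zip(signal, code)) / len(code)

signal = detector_signal({"ap1": 3.0, "ap2": 5.0})
```

Because the two codes are orthogonal, correlating the mixed signal with either code returns only that aperture's contribution, which is the separation property the claims describe.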

  2. The status of intercultural mediation in translation: Is it an absolute licence?

    Directory of Open Access Journals (Sweden)

    Akbari Alireza

    2017-06-01

    Full Text Available The role of the translator as Sprachmittler or intercultural mediator has attracted much attention since the advent of the “cultural turn” paradigm. The present research paper seeks to figure out how the manifestations of intercultural mediation are achieved via translation in terms of two mediation facets, viz. personal and communicated interpretations. Whereas the former deals with the presence of the translator between the source and target cultures, the latter concerns the role of the reader of the translated text in the target language through several mediational strategies, including expansion, reframing, replacement, eschewing of dispreferred structures, and dispensation, to capture the message of the source text. The rationale for focusing on these strategies lies in the fact that translators often resort to transliteration and literal translation strategies when it comes to cultural items and concepts. As far as the review of the literature indicates, mediational translation has not received due attention in the Persian language, since Persian differs in this respect from other languages such as English, French, etc. In the case of language patterning, such a study reveals some novel but applicable cultural translation strategies that highlight the nature of mediation in cultural translation.

  3. The IAEA code of conduct on the safety of radiation sources and the security of radioactive materials. A step forwards or backwards?

    International Nuclear Information System (INIS)

    Boustany, K.

    2001-01-01

    During the finalization of the Code of Conduct on the Safety and Security of Radioactive Sources, it appeared that two distinct but interrelated subject areas had been identified: the prevention of accidents involving radiation sources and the prevention of theft or any other unauthorized use of radioactive materials. What analysis reveals, rather, is that there are gaps in both the content of the Code and the processes relating to it. Nevertheless, new standards have been introduced as a result of this exercise and have thus, as an enactment of what constitutes appropriate behaviour in the field of the safety and security of radioactive sources, emerged into the arena of international relations. (N.C.)

  4. Towards a classification of translation styles based on eye-tracking and keylogging data

    Directory of Open Access Journals (Sweden)

    Barbara Dragsted & Michael Carl

    2013-06-01

    Full Text Available This article seeks to formulate translator profiles based on process data from keylogging and eye-tracking, while at the same time identifying features which are shared by all translators in a sample consisting of both students and professionals. Data have been collected from 12 professional translators and 12 graduate students translating three texts of varying complexity. We found that individual behavioural characteristics with respect to initial orientation in the source text (ST, online ST reading, and online and end revision remained relatively constant across texts of varying complexity, supporting our hypothesis that translator profiles can be observed which are independent of the difficulty of the translation task. The analysis of the data also indicated that translators could be grouped into broad categories of locally-oriented and globally-oriented translation styles, which are partly, though not entirely, comparable to styles known from writing research. We also identified shared features with respect to reading and revision behaviour during drafting. Common to all translators was that they looked beyond the source text word they were about to translate, and that they made revisions while drafting the translation.

  5. Mapping Translation Technology Research in Translation Studies

    DEFF Research Database (Denmark)

    Schjoldager, Anne; Christensen, Tina Paulsen; Flanagan, Marian

    2017-01-01

    Due to the growing uptake of translation technology in the language industry and its documented impact on the translation profession, translation students and scholars need in-depth and empirically founded knowledge of the nature and influences of translation technology (e.g. Christensen/Schjoldager 2010, 2011; Christensen 2011). Unfortunately, the increasing professional use of translation technology has not been mirrored within translation studies (TS) by a similar increase in research projects on translation technology (Munday 2009: 15; O’Hagan 2013; Doherty 2016: 952). The current thematic section aims to improve this situation by presenting new and innovative research papers that reflect on recent technological advances and their impact on the translation profession and translators from a diversity of perspectives and using a variety of methods. In Section 2, we present translation...

  6. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far beyond the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain-decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules allowing it to deal with binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.
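    The particle push at the heart of any PIC code such as SMILEI can be sketched in a few lines. The following is a deliberately minimal 1-D electrostatic, non-relativistic leapfrog step; real codes add charge deposition to a grid, a field solve, relativistic pushers, and the domain decomposition discussed above. All names and values here are illustrative, not taken from SMILEI.

    ```python
    # Minimal 1-D electrostatic particle push: one leapfrog step advances
    # velocity by a full dt using E at the particle position, then advances
    # position with the new velocity.

    def push(positions, velocities, efield, q_over_m, dt):
        new_v = [v + q_over_m * efield(x) * dt for x, v in zip(positions, velocities)]
        new_x = [x + v * dt for x, v in zip(positions, new_v)]
        return new_x, new_v

    # Uniform field: a particle starting at rest gains velocity (q/m) * E * t.
    E = lambda x: 2.0          # constant field, for the sketch only
    x, v = [0.0], [0.0]
    dt, steps = 0.01, 100
    for _ in range(steps):
        x, v = push(x, v, E, q_over_m=1.0, dt=dt)
    ```

    In a full PIC loop this push alternates with interpolating fields from the grid to the particles and depositing particle currents back onto the grid.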

  7. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Full text: The knowledge of the neutron spectra in a PET cyclotron is important for the optimization of radiation protection of the workers and individuals of the public. The main objective of this work is to study the source term of radiation of the GE-PET trace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components, during 18F production. The estimate of the source term and the corresponding radiation field was performed from the bombardment of a H{sub 2}{sup 18}O target with protons with a current of 75 μA and an energy of 16.5 MeV. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons also differed from those reported by GE Healthcare. It is recommended to investigate other cross-section data and the use of physical models of the code itself for a complete characterization of the source term of radiation. (Author)

  8. ANALYSIS ON THE TRANSLATION OF WORDPLAYS IN “THE GOD OF SMALL THINGS” BY ARUNDHATI ROY (Studies on the Wordplay Shifts and the Translation Quality)

    Directory of Open Access Journals (Sweden)

    Nur Saptaningsih

    2017-04-01

    Full Text Available Wordplay commonly appears in literary works to enrich the works themselves with certain effect and nuance, either to make a joke or to conceal anything taboo. However, problems frequently occur in the translation of wordplay, and this becomes an obstacle for a translator seeking a proper equivalent. Moreover, the translation of wordplay is closely related to different language systems (source and target languages). The novel “The God of Small Things” by Arundhati Roy is rich in wordplay, but there are a number of shifts found in the translated version of the wordplays. This paper deals with descriptive-qualitative research aiming at investigating the shifts of wordplays in the novel “The God of Small Things” by Arundhati Roy and in its translated version. This study also highlights the contribution of the shifts to the translation quality, in terms of accuracy and acceptability. This product-oriented study applies the embedded-case method. The first data for this research are documents, consisting of the source text and the translated text. The other data are obtained from informants (raters and respondents), consisting of information dealing with accuracy and acceptability. The data are collected using document analyses, questionnaire, and interview. Purposive sampling and content analysis are applied.

  9. Improved finite-source inversion through joint measurements of rotational and translational ground motions: a numerical study

    Science.gov (United States)

    Reinwald, Michael; Bernauer, Moritz; Igel, Heiner; Donner, Stefanie

    2016-10-01

    With the prospects of seismic equipment being able to measure rotational ground motions in a wide frequency and amplitude range in the near future, we engage in the question of how this type of ground motion observation can be used to solve the seismic source inverse problem. In this paper, we focus on the question of whether finite-source inversion can benefit from additional observations of rotational motion. Keeping the overall number of traces constant, we compare observations from a surface seismic network with 44 three-component translational sensors (classic seismometers) with those obtained with 22 six-component sensors (with additional three-component rotational motions). Synthetic seismograms are calculated for known finite-source properties. The corresponding inverse problem is posed in a probabilistic way using the Shannon information content to measure how the observations constrain the seismic source properties. We minimize the influence of the source-receiver geometry around the fault by statistically analyzing six-component inversions with a random distribution of receivers. Since our previous results were achieved with a regular spacing of the receivers, we try to answer the question of whether the results depend on the spatial distribution of the receivers. The results show that kinematic source inversions with the six-component subnetworks are not only equally successful for source properties such as rupture velocity, rise time, and slip amplitudes (which would already be beneficial, given the substantially reduced logistics of installing half the sensors), but for some source properties the inversions are almost always statistically improved. This can be attributed to the fact that the (in particular vertical) gradient information is contained in the additional motion components. We compare these effects for strike-slip and normal-faulting type sources and confirm that the increase in inversion quality for kinematic source parameters is

  10. Code-Mixing and Code-Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors influencing them. The research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School, Ponorogo. Based on the analysis and discussion stated in the previous chapter, code mixing and code switching in learning activities in Al Mawaddah Boarding School occur between Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The factors determining code mixing in the learning process include: identification of the role, the desire to explain and interpret, material sourced from the original language and its variations, and material sourced from a foreign language. The factors determining code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah boarding school, regarding the rules and characteristic variation in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  11. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermalhydraulics, fission product and structural material release, vapours and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison to experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application, named COCOA, as well as with CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested, while the models themselves are discussed.

  12. Translating children's stories - reflections and practices

    Directory of Open Access Journals (Sweden)

    Muguras Constantinescu

    2015-08-01

    Full Text Available http://dx.doi.org/10.5007/2175-7968.2016v36n1p155 The present article is concerned with the specificity of translations for children, with an emphasis on the cultural dimension to be preserved in the target text. Following a brief historical and theoretical overview on the issue of translating texts for children, we undertake a succinct analysis of a corpus made up of tales which display an overtness to the other while treating the identity-alterity issue. Starting from our own translation practice, we will insist upon those strategies and techniques fit to render the cultural dimension of the source text in order to give young readers access to a foreign culture. Translational strategies are analysed alongside editorial and pedagogical strategies, with a special focus on the paratext, which aims at satisfying the readers’ curiosity and contributing to the development of their encyclopedic competence.

  13. Translating Signs, Producing Subjects

    Directory of Open Access Journals (Sweden)

    Brett Neilson

    2009-08-01

    Full Text Available This paper moves between two streets: Liverpool Road in the Sydney suburb of Ashfield and Via Sarpi in the Italian city of Milan. What connects these streets is that both have become important sites for businesses in the Chinese diaspora. Moreover, both are streets on which locals have expressed desires for Chinese signs to be translated into the national lingua franca. The paper argues that the cultural politics inherent in this demand for translation cannot be fully understood in the context of national debates about diversity and integration. It is also necessary to consider the emergence of the official Chinese Putonghua as global language, which competes with English but also colonizes dialects and minority languages. In the case of these dual language signs, the space between languages can neither be reduced to a contact zone of minority and majority cultures nor celebrated as a ‘third space’ where the power relations implied by such differences are subverted. At stake is rather a space characterised by what Naoki Sakai calls the schema of co-figuration, which allows the representation of translation as the passage between two equivalents that resemble each other and thus makes possible their determination as conceptually different and comparable. Drawing on arguments about translation and citizenship, the paper critically interrogates the ethos of interchangeability implied by this regime of translation. A closing argument is made for a vision of the common that implies neither civilisational harmony nor the translation of all values into a general equivalent. Primary sources include government reports, internet texts and media stories. These are analyzed using techniques of discourse analysis and interpreted with the help of secondary literature concerning globalisation, language and migration. The disciplinary matrix cuts and mixes between cultural studies, translation studies, citizenship studies, globalization studies and

  14. Indlela or uhambo? Translator style in Mandela’s autobiography

    Directory of Open Access Journals (Sweden)

    Amanda Nokele

    2016-10-01

    Full Text Available One of the aspects that concerns translation scholars most is the question of the translator’s style. It was realised that little research had been undertaken investigating the individual style of literary translators in terms of what might be distinct about their language usage. Consequently, a methodological framework for such an investigation was suggested. Subsequently considerable research has been conducted on style in the European languages. However, the same cannot be said about African languages. This article proposes a corpus-driven study of translators’ style, comparing isiXhosa and isiZulu translations of Mandela’s Long Walk to Freedom by Mtuze and Ntuli, both published in 2001. The target texts are compared with each other focusing on the use of italics, loan words and expansions and contractions as features that distinguish the two translators. The source text was used not to evaluate the target texts but to understand the translators’ choices. ParaConc Multilingual Concordancer was used to align the source text and its target texts for easy examination. The results revealed that the fact that the two translators were dealing with an autobiography did not deter them from displaying their personal imprints as creative writers.

  15. Non-coding, mRNA-like RNAs database Y2K.

    Science.gov (United States)

    Erdmann, V A; Szymanski, M; Hochberg, A; Groot, N; Barciszewski, J

    2000-01-01

    In the last few years much data has accumulated on various non-translatable RNA transcripts that are synthesised in different cells. They lack protein-coding capacity and it seems that they work mainly or exclusively at the RNA level. All known non-coding RNA transcripts are collected in the database: http://www.man.poznan.pl/5SData/ncRNA/index.html

  16. Tuning of Recombinant Protein Expression in Escherichia coli by Manipulating Transcription, Translation Initiation Rates, and Incorporation of Noncanonical Amino Acids.

    Science.gov (United States)

    Schlesinger, Orr; Chemla, Yonatan; Heltberg, Mathias; Ozer, Eden; Marshall, Ryan; Noireaux, Vincent; Jensen, Mogens Høgh; Alfonta, Lital

    2017-06-16

    Protein synthesis in cells has been thoroughly investigated and characterized over the past 60 years. However, some fundamental issues remain unresolved, including the reasons for genetic code redundancy and codon bias. In this study, we changed the kinetics of the Escherichia coli transcription and translation processes by mutating the promoter and ribosome binding domains and by using genetic code expansion. The results expose a counterintuitive phenomenon, whereby an increase in the initiation rates of transcription and translation leads to a decrease in protein expression. This effect can be rescued by introducing slow-translating codons into the beginning of the gene, by shortening gene length or by reducing initiation rates. On the basis of the results, we developed a biophysical model, which suggests that the density of co-transcriptional translation plays a role in bacterial protein synthesis. These findings indicate how cells use codon bias to tune translation speed and protein synthesis.

  17. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as a Global Positioning System (GPS) and Inertial Navigation System (INS), must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as
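    The coded-aperture half of such a reconstruction can be illustrated with a toy 1-D correlation decoder. The mask, geometry and sizes below are invented for illustration and are unrelated to the actual TMI hardware; the point is only that correlating the detector pattern with the mask recovers the source position, because the mask's autocorrelation peaks at zero lag.

    ```python
    import random

    # Toy 1-D coded-aperture imaging: a random open/closed mask casts a
    # shifted copy of itself onto the detector for a single point source.
    random.seed(1)
    N = 31
    mask = [random.randint(0, 1) for _ in range(N)]

    # Point source at position `src` (circular geometry keeps the sketch short).
    src = 7
    detector = [mask[(i - src) % N] for i in range(N)]

    # Decode by circular correlation with the mask: the decoded image peaks
    # at the source position, with height equal to the number of open holes.
    decoded = [sum(detector[(i + k) % N] * mask[i] for i in range(N))
               for k in range(N)]
    estimate = decoded.index(max(decoded))
    ```

    Real systems use specially designed 2-D mask families whose autocorrelation sidelobes are flat, so that extended sources decode without artifacts; a random mask, as here, only guarantees a dominant peak for a point source.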

  18. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis…

  19. Studies of plasmonic hot-spot translation by a metal-dielectric layered superlens

    DEFF Research Database (Denmark)

    Thoreson, Mark D.; Nielsen, Rasmus Bundgaard; West, Paul R.

    2011-01-01

    Using optical nanoantennas as sources, we investigated the translation of these sources to the far side of a layered silver-silica superlens operating in the canalization regime. Using near-field scanning optical microscopy (NSOM), we have observed evidence of superlens-enabled enhanced-field translation at a wavelength of about 680 nm. Specifically, we discuss our recent experimental and simulation results on the translation of hot spots using a silver-silica layered superlens design. We compare the experimental results with our numerical simulations and discuss the perspectives and limitations of our approach.

  20. Explicitation in Translation: Culture-Specific Items from Persian into English

    OpenAIRE

    MORADI, Neda; RAHBAR, Muhamad; OLFATI, Mohsen

    2015-01-01

    Abstract. Investigating the concept of explicitation has been the center of attention of many scholars in recent years. Many studies have been carried out dealing with the exploitation of explicitation on culture-specific items in the translation of English source texts into Persian. However, few studies have investigated such culture-specific items in Persian source texts translated into English. The present article aims at providing sufficient data to cover the identific...

  1. The Development and Current State of Translation Process Research

    DEFF Research Database (Denmark)

    Lykke Jakobsen, Arnt

    2014-01-01

    The development and current state of translation process research. Arnt Lykke Jakobsen, Copenhagen Business School. Interest in process-oriented translation studies has been intense for the past almost half a century. Translation process research (TPR) is the label we have used to refer to a spe… …which simultaneously tracks the translator’s eye movements across a screen displaying both a source text and the translator’s emerging translation. This research method was developed as a means of qualifying and strengthening translation process hypotheses based on verbal reports by providing additional… …itself, into regions like cognitive psychology, psycho- and neurolinguistics, and neuroscience, where the interest in what goes on in our heads is also very strong.

  2. Study on the Ethical Implication of Translation Activities%论翻译中的伦理关涉

    Institute of Scientific and Technical Information of China (English)

    杨荣广

    2012-01-01

    Translation activities, like ethical norms, concern relationships between people. Ethical values, as the most fundamental codes of human conduct, naturally also regulate and mediate translation activities. This regulation involves the translator's professional ethics, the ethical ideas carried by the material being translated, and the ethical attitude towards the "foreign" or the "other" reflected in the chosen translation strategy. Starting from translation theories that engage with ethics, the paper first presents a historical review of theoretical discussions of ethics in translation and then discusses in detail the influence of ethical factors on translation activities in three respects: the selection of the source text, the translation strategy, and the acceptance of the target text.

  3. Tinkering with Translation: Protein Synthesis in Virus-Infected Cells

    Science.gov (United States)

    Walsh, Derek; Mathews, Michael B.; Mohr, Ian

    2013-01-01

    Viruses are obligate intracellular parasites, and their replication requires host cell functions. Although the size, composition, complexity, and functions encoded by their genomes are remarkably diverse, all viruses rely absolutely on the protein synthesis machinery of their host cells. Lacking their own translational apparatus, they must recruit cellular ribosomes in order to translate viral mRNAs and produce the protein products required for their replication. In addition, there are other constraints on viral protein production. Crucially, host innate defenses and stress responses capable of inactivating the translation machinery must be effectively neutralized. Furthermore, the limited coding capacity of the viral genome needs to be used optimally. These demands have resulted in complex interactions between virus and host that exploit ostensibly virus-specific mechanisms and, at the same time, illuminate the functioning of the cellular protein synthesis apparatus. PMID:23209131

  4. A Flexible Statechart-to-Model-Checker Translator

    Science.gov (United States)

    Rouquette, Nicolas; Dunphy, Julia; Feather, Martin S.

    2000-01-01

    Many current-day software design tools offer some variant of statechart notation for system specification. We, like others, have built an automatic translator from (a subset of) statecharts to a model checker, for use to validate behavioral requirements. Our translator is designed to be flexible. This allows us to quickly adjust the translator to variants of statechart semantics, including problem-specific notational conventions that designers employ. Our system demonstration will be of interest to the following two communities: (1) Potential end-users: Our demonstration will show translation from statecharts created in a commercial UML tool (Rational Rose) to Promela, the input language of Holzmann's model checker SPIN. The translation is accomplished automatically. To accommodate the major variants of statechart semantics, our tool offers user-selectable choices among semantic alternatives. Options for customized semantic variants are also made available. The net result is an easy-to-use tool that operates on a wide range of statechart diagrams to automate the pathway to model-checking input. (2) Other researchers: Our translator embodies, in one tool, ideas and approaches drawn from several sources. Solutions to the major challenges of statechart-to-model-checker translation (e.g., determining which transition(s) will fire, handling of concurrent activities) are realized in a uniform, fully mechanized setting. The way in which the underlying architecture of the translator itself facilitates flexible and customizable translation will also be evident.
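    The core of a statechart-to-Promela translation can be sketched as a small generator: each state becomes a label and each outgoing transition an option in a guarded if-block. The input format and the emitted Promela below are invented for illustration (events are reduced to boolean guards); a real translator like the one described must additionally handle hierarchy, concurrency, and the user-selected firing semantics.

    ```python
    # Sketch: flatten a list of (source_state, event, target_state) triples
    # into a Promela proctype, with one label per state and one guarded
    # option per outgoing transition.

    def to_promela(name, transitions, initial):
        lines = [f"active proctype {name}() {{", f"  goto {initial};"]
        states = sorted({s for s, _, _ in transitions} | {t for _, _, t in transitions})
        for state in states:
            outgoing = [(ev, tgt) for src, ev, tgt in transitions if src == state]
            lines.append(f"{state}:")
            if outgoing:
                lines.append("  if")
                for ev, tgt in outgoing:
                    lines.append(f"  :: {ev} -> goto {tgt}")
                lines.append("  fi;")
            else:
                lines.append("  skip;")
        lines.append("}")
        return "\n".join(lines)

    promela = to_promela(
        "machine",
        [("idle", "start", "running"),
         ("running", "stop", "idle"),
         ("running", "fail", "halted")],
        initial="idle",
    )
    ```

    The interesting design decisions live outside this sketch: how simultaneous enabled transitions are ordered, and how concurrent regions are mapped onto separate proctypes, are exactly the semantic variants the paper makes user-selectable.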

  5. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program, PRIMUS, reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are computed from the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS is needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
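    The matrix-operator idea can be sketched for a two-member decay chain: with the decay constants assembled into a matrix A, the concentrations evolve as N(t) = exp(At) N(0). The chain and half-lives below are invented for illustration, and the tiny Taylor-series matrix exponential merely stands in for whatever scheme PRIMUS's decay subroutines actually use; the daughter concentration is cross-checked against the analytic Bateman solution.

    ```python
    import math

    def expm(A, t, terms=40):
        """exp(A*t) for a small matrix, by truncated Taylor series."""
        n = len(A)
        result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
        term = [row[:] for row in result]
        for k in range(1, terms):
            term = [[sum(term[i][m] * A[m][j] * t / k for m in range(n))
                     for j in range(n)] for i in range(n)]
            result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
        return result

    # Hypothetical parent -> daughter chain with half-lives of 8 and 3 days.
    l1, l2 = math.log(2) / 8.0, math.log(2) / 3.0
    A = [[-l1, 0.0],
         [l1, -l2]]
    N0 = [1000.0, 0.0]
    t = 5.0
    M = expm(A, t)
    N = [sum(M[i][j] * N0[j] for j in range(2)) for i in range(2)]

    # Analytic Bateman solution for the daughter, for comparison.
    bateman = N0[0] * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    ```

    The appeal of the matrix-operator form is that the same exp(At) machinery handles chains of any length, whereas the closed-form Bateman expressions grow unwieldy as members are added.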

  6. Some reflections on Romanian translation studies

    Directory of Open Access Journals (Sweden)

    Magda Jeanrenaud

    2015-07-01

    Full Text Available In this paper, I decided to examine two “verdicts” on the specificity (or the lack of specificity of Romanian translation studies in order to define and explain the present situation in this field: one given in the Encyclopedia of Translation Studies, the other in a book on “Romanian translation ideas and meta-ideas”. I believe that the current situation justifies the following hypothesis: the current specific of translation theories in the Romanian space is entailed by the existence of two circuits. The first is academic and international, aiming at including the Romanian research in European directions, by assimilating them more or less. The second is a national circuit—where the positions expressed within the first circuit penetrate indirectly, through “central” languages—and it manifests itself as selective and elective affinities between the two; their interaction is sporadic and it occurs mainly through other languages, not through an interiorization process related to the language of the Romanian source space.

  7. The third language: A recurrent textual restriction that translators come across in audiovisual translation.

    Directory of Open Access Journals (Sweden)

    Montse Corrius Gimbert

    2005-01-01

    Full Text Available If the process of translating is not at all simple, the process of translating an audiovisual text is still more complex. Apart from technical problems such as lip synchronisation, there are other factors to be considered, such as the use of the language and textual structures deemed appropriate to the channel of communication. Bearing in mind that most of the films we are continually seeing on our screens were and are produced in the United States, there is an increasing need to translate them into the different languages of the world. But sometimes the source audiovisual text contains more than one language, and, thus, a new problem arises: the translators face additional difficulties in translating this “third language” (language or dialect) into the corresponding target culture. There are many films containing two languages in the original version, but in this paper we will focus mainly on three films: Butch Cassidy and the Sundance Kid (1969), Raid on Rommel (1999) and Blade Runner (1982). This paper aims at briefly illustrating different solutions which may be applied when we come across a “third language”.

  8. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  9. Cause for concern? Attitudes towards translation crowdsourcing in professional translators’ blogs

    DEFF Research Database (Denmark)

    Flanagan, Marian

    2016-01-01

    This paper seeks to identify professional translators’ attitudes towards the practice of translation crowdsourcing. The data consist of 48 professional translator blogs. A thematic analysis of their blog posts highlights three main findings: translation crowdsourcing can enhance visibility...... do not openly discuss their motives for differentiating between the various non-profit initiatives, and while there is much discussion on translation crowdsourcing for humanitarian causes, little or no attention is paid to free and open source software projects....

  10. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
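A document-oriented, hierarchical representation of a classification such as ICD-10 can be sketched with standard XML tooling. The element and attribute names below are invented for illustration; the authors' actual schema and coding rules are not reproduced here:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a hierarchical XML representation of one ICD-10
# chapter. Element/attribute names ("chapter", "block", "category")
# are illustrative, not the paper's DTD/schema.
chapter = ET.Element("chapter", code="IX", title="Diseases of the circulatory system")
block = ET.SubElement(chapter, "block", code="I10-I15")
cat = ET.SubElement(block, "category", code="I10")
cat.text = "Essential (primary) hypertension"

print(ET.tostring(chapter, encoding="unicode"))
```

Because the hierarchy is explicit in the markup, such entries can then be linked to related sources, for example via XPath-style queries or XML topic maps as the authors describe.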

  11. The Open Translation MOOC: Creating Online Communities to Transcend Linguistic Barriers

    Science.gov (United States)

    Beaven, Tita; Comas-Quinn, Anna; Hauck, Mirjam; de los Arcos, Beatriz; Lewis, Timothy

    2013-01-01

    One of the main barriers to the reuse of Open Educational Resources (OER) is language (OLnet, 2009). OER may be available but in a language that users cannot access, so a preliminary step to reuse is their translation or localization. One of the obvious solutions to the vast effort required to translate OER is to crowd-source the translation, as…

  12. Version 4.00 of the MINTEQ geochemical code

    Energy Technology Data Exchange (ETDEWEB)

    Eary, L.E.; Jenne, E.A.

    1992-09-01

    The MINTEQ code is a thermodynamic model that can be used to calculate solution equilibria for geochemical applications. Included in the MINTEQ code are formulations for ionic speciation, ion exchange, adsorption, solubility, redox, gas-phase equilibria, and the dissolution of finite amounts of specified solids. Since the initial development of the MINTEQ geochemical code, a number of undocumented versions of the source code and data files have come into use at the Pacific Northwest Laboratory (PNL). This report documents these changes, describes source code modifications made for the Aquifer Thermal Energy Storage (ATES) program, and provides comprehensive listings of the data files. A version number of 4.00 has been assigned to the MINTEQ source code and the individual data files described in this report.

  14. From Adaptation to Appropriation: Framing the World Through News Translation

    Directory of Open Access Journals (Sweden)

    Valdeón Roberto A.

    2014-02-01

    Full Text Available Terminological issues are problematic in the analysis of translation processes in news production. In the 1980s, Stetting coined the term “transediting”, which has been widely used in the translation studies literature, but “translation” itself becomes contentious in communication studies, a discipline closely related to news translation research. Only a few communication scholars have specifically dealt with the linguistic and cultural transformations of source texts, but they tend to regard translation as word-for-word transfer, which is unusual in news production. More productive for the study of news translation seems to be the application of the concept of framing, widely used in communication studies. Framing considers the linguistic and paralinguistic elements of news texts in the promotion of certain organizing ideas that the target audience can identify with. In news translation, this entails the adaptation of a text for the target readership, a process that can lead to the appropriation of source material. Two examples are mentioned to illustrate this point: the appropriation of the US Department of State cables by the WikiLeaks organisation, and the pro-Romanian slogans produced by the Gandul newspaper as a response to Britain’s anti-immigration campaigns. The final section relates news adaptation to the adaptation of other text types, such as literary and historical works.

  15. Gazing and Typing Activities during Translation

    DEFF Research Database (Denmark)

    Carl, Michael; Kay, Martin

    2011-01-01

    The paper investigates the notion of Translation Units (TUs) from a cognitive angle. A TU is defined as the translator’s focus of attention at a time. Since attention can be directed towards source text (ST) understanding and/or target text (TT) production, we analyze the activity data...... of a 160 word text we find major differences between students and professionals: Experienced professional translators are better able to divide their attention in parallel on ST reading (comprehension) and TT production, while students operate more in an alternating mode where they either read the ST...

  16. Understanding Translation

    DEFF Research Database (Denmark)

    Schjoldager, Anne Gram; Gottlieb, Henrik; Klitgård, Ida

    Understanding Translation is designed as a textbook for courses on the theory and practice of translation in general and of particular types of translation - such as interpreting, screen translation and literary translation. The aim of the book is to help you gain an in-depth understanding...... of the phenomenon of translation and to provide you with a conceptual framework for the analysis of various aspects of professional translation. Intended readers are students of translation and languages, but the book will also be relevant for others who are interested in the theory and practice of translation...... - translators, language teachers, translation users and literary, TV and film critics, for instance. Discussions focus on translation between Danish and English....

  17. What does Attention in Neural Machine Translation Pay Attention to?

    NARCIS (Netherlands)

    Ghader, H.; Monz, C.; Kondrak, G.; Watanabe, T.

    2017-01-01

    Attention in neural machine translation provides the possibility to encode relevant parts of the source sentence at each translation step. As a result, attention is considered to be an alignment model as well. However, there is no work that specifically studies attention and provides analysis of

  18. Deconstructing Equivalence in the Translation of Texts from French to Indonesian

    Directory of Open Access Journals (Sweden)

    Sajarwa Sajarwa

    2017-06-01

    Full Text Available Translation is a process of reproducing a source text (ST) in an equivalent target text (TT). The equivalence of a translation includes the message of the text. Several factors, such as the writer, translator, publisher, reader, or the spirit of a certain era, determine translation equivalence. In translation, equivalence is negotiated and transacted; in consequence it is highly likely that the current equivalence will be different in the future. Deconstruction theory claims that the relationship between a signifier and a signified is inconstant; however, it can be “deferred” to obtain a new or different relationship. As a result, a meaning may change in accordance with the will of its user. The results of this research indicate four differences between the TT1 and TT2 translations: (1) within a period of twenty years of social and political change (1990-2010), TT1 reveals regional issues, while TT2 reveals social class issues; (2) TT2's disclosure of meaning is more direct, open, and occasionally rude than the subtle and euphemistic TT1; (3) TT2 tends to follow an ideology of foreignization by inserting foreign words or words from the source language, while TT1 tends to follow an ideology of domestication; (4) there are different viewpoints between the TT1 translator and the TT2 translator.

  19. Secure and Practical Defense Against Code-Injection Attacks using Software Dynamic Translation

    Science.gov (United States)

    2006-06-16

    [Abstract garbled in extraction. The recoverable fragments describe the mechanics of a software dynamic translation virtual machine: application text is fetched, decoded and translated into cached code fragments (e.g., cmpl %eax,%ecx followed by a conditional branch) that are chained by trampolines, with context switches between the translated fragments and the VM; the server and client configurations were motivated by the desire to measure the processor overhead imposed by the Strata VM.]

  20. On fuzzy semantic similarity measure for DNA coding.

    Science.gov (United States)

    Ahmad, Muneer; Jung, Low Tang; Bhuiyan, Md Al-Amin

    2016-02-01

    A coding measure scheme numerically translates the DNA sequence to a time domain signal for protein coding regions identification. A number of coding measure schemes based on numerology, geometry, fixed mapping, statistical characteristics and chemical attributes of nucleotides have been proposed in recent decades. Such coding measure schemes lack the biologically meaningful aspects of nucleotide data and hence do not significantly discriminate coding regions from non-coding regions. This paper presents a novel fuzzy semantic similarity measure (FSSM) coding scheme centering on FSSM codons' clustering and genetic code context of nucleotides. Certain natural characteristics of nucleotides, i.e. appearance as a unique combination of triplets, preserving special structure and occurrence, and ability to own and share density distributions in codons, have been exploited in FSSM. The nucleotides' fuzzy behaviors, semantic similarities and defuzzification based on the center of gravity of nucleotides revealed a strong correlation between nucleotides in codons. The proposed FSSM coding scheme attains a significant enhancement in coding regions identification, i.e. 36-133% as compared to other existing coding measure schemes tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
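For context, the simplest family of coding measure schemes the paper compares against ("fixed mapping") can be illustrated with the classic Voss binary-indicator representation, which maps a DNA string to four 0/1 signals. This sketch is deliberately not the fuzzy FSSM scheme itself:

```python
# Toy illustration of a DNA "coding measure scheme": the Voss
# binary-indicator representation, one 0/1 signal per nucleotide.
# (Fixed-mapping baseline only -- not the paper's fuzzy FSSM method.)

def voss_indicators(seq):
    """Map a DNA string to four binary indicator sequences."""
    seq = seq.upper()
    return {base: [1 if s == base else 0 for s in seq] for base in "ACGT"}

signals = voss_indicators("ATGGCA")
print(signals["A"])  # [1, 0, 0, 0, 0, 1]
```

Spectral methods then look for period-3 energy in such signals as evidence of a protein-coding region.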

  1. About the Definition, Classification, and Translation Strategies of Idioms

    Directory of Open Access Journals (Sweden)

    Kovács Gabriella

    2016-12-01

    Full Text Available In translator training, the process of planning and implementing the teaching process and the design of teaching materials should be dynamic and flexible. With the future purpose to design teaching materials for idiom translation, this study proposes to explore those characteristics of idioms which might cause difficulties when translating them and some of the various classifications of idioms. Some of the relevant factors which might determine the appropriateness and acceptability of idiom translation and some of the translation strategies recommended in the specialized literature will also be presented. We proposed to analyse the idiom-translating solutions and strategies which the literary translator of the novel “A Game of Thrones” chose while translating it into Hungarian. Our aim is to decide whether the novel can be an appropriate source for authentic teaching material. We chose this novel mainly because it has recently become very popular among students, it is rich in idioms, and we believe that different aspects of idiom typology and different strategies used in idiom translation can be exemplified, demonstrated, and practised with the help of different tasks based on the original text and its Hungarian translation.

  2. Network coding for multi-resolution multicast

    DEFF Research Database (Denmark)

    2013-01-01

    A method, apparatus and computer program product for utilizing network coding for multi-resolution multicast is presented. A network source partitions source content into a base layer and one or more refinement layers. The network source receives a respective one or more push-back messages from one...... or more network destination receivers, the push-back messages identifying the one or more refinement layers suited for each one of the one or more network destination receivers. The network source computes a network code involving the base layer and the one or more refinement layers for at least one...... of the one or more network destination receivers, and transmits the network code to the one or more network destination receivers in accordance with the push-back messages....

  3. On the Need of Novel Medium Access Control Schemes for Network Coding enabled Wireless Mesh Networks

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani

    2013-01-01

    that network coding will improve the throughput in such systems, but our novel medium access scheme improves the performance in the cross topology by another 66 % for network coding and 150 % for classical forwarding in theory. These gains translate in a theoretical gain of 33 % of network coding over...

  4. SOURCE LANGUAGE TEXT, PARALELL TEXT, AND MODEL TRANSLATED TEXT: A PILOT STUDY IN TEACHING TRANSLATION TEXTO LENGUA ORIGEN, TEXTO PARALELO Y TEXTO TRADUCIDO MODELO. ESTUDIO PILOTO EN LA ENSEÑANZA DE LA TRADUCCIÓN

    Directory of Open Access Journals (Sweden)

    Sergio Bolaños Cuéllar

    2007-12-01

    Full Text Available The advance in cultural-oriented perspectives in Translation Studies has sometimes played down the text linguistic nature of translation. A pilot study in teaching translation was carried out to make students aware of the text linguistic character of translating and help them to improve their translation skills, particularly with an emphasis on self-awareness and self-correcting strategies. The theoretical background is provided by the Dynamic Translation Model (2004, 2005) proposed by the author, with relevant and important contributions taken from Genette's (1982) transtextuality phenomena (hypertext, hypotext, metatext, paratext, intertext) and House and Kasper's (1981) pragmatic modality markers (downgraders, upgraders). The key conceptual role of equivalence as a defining feature of translation is also dealt with. The textual relationship between Source Language Text (SLT) is deemed to be pivotal for performing translation and correction tasks in the classroom. Finally, results of the pilot study are discussed and some conclusions are drawn. El desarrollo de las teorías traductológicas orientadas hacia la cultura en ocasiones ha opacado la naturaleza textolingüística de la traducción. Se llevó a cabo un estudio piloto para la enseñanza de la traducción con el fin de recalcar entre los estudiantes el carácter textolingüístico de la labor de traducción y para ayudarles a mejorar sus habilidades de traducción, con especial énfasis en las estrategias de autoconciencia y autocorrección. El marco teórico proviene del Modelo Traductológico Dinámico (2004, 2005), propuesto por el autor, con destacados aportes tomados de los fenómenos de transtextualidad de Genette (1982) (hipertexto, hipotexto, metatexto, paratexto, intertexto) y de los marcadores de modalidad pragmática de House y Kasper (1981) (atenuadores, intensificadores). También se aborda el papel conceptual fundamental de la equivalencia como rasgo determinante de la traducción.

  5. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high power spallation target are of general interest in the design of an Accelerator Driven System (ADS) and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Analysis of transients, such as unregulated cooling of the target, loss of heat sink, the main electro-magnetic pump trip of the LBE loop and unprotected proton beam trip, were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  6. An evaluation of a translator for finite element data to resistor/capacitor data for the heat diffusion equation

    International Nuclear Information System (INIS)

    Manteufel, R.D.; Klein, D.E.; Yoshimura, H.R.

    1988-01-01

    This paper evaluates a translator for finite element data to resistor/capacitor data (FEM/RC) for the numerical solution of heat diffusion problems. The translator involves the derivation of thermal resistors and capacitors, implicit in the heat balance formulation of the finite difference method. It uses a finite element mesh, which consists of nodes and elements and is implicit in the Galerkin finite element method (GFEM). This hybrid translation method, FEM/RC, has been incorporated in Q/TRAN, a new thermal analysis computer code. This evaluation compares Q/TRAN, HEATING-6, and a research code employing GFEM on a purely mathematical, highly nonlinear steady-state conduction benchmark problem. The evaluation concludes that the FEM/RC technique has numerical characteristics that are consistent with comparable schemes for the benchmark problem. FEM/RC also accurately translates skewed meshes. Because FEM/RC generates resistors and capacitors, it appears to offer a more efficient method than the classical GFEM
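The resistor/capacitor view that such a translator produces can be illustrated with a back-of-the-envelope 1-D example: each element of length L between two nodes contributes a thermal resistance R = L/(kA), and each node lumps a capacitance C = ρ·cp·V. The material and geometry values below are illustrative, not taken from the paper:

```python
# Sketch of the thermal resistor/capacitor quantities an FEM-to-RC
# translator derives for 1-D conduction. Values are illustrative
# (a 1 m steel-like bar split into 4 elements), not from the paper.

k, rho, cp = 50.0, 7800.0, 500.0      # W/(m K), kg/m^3, J/(kg K)
A, L_total, n_elem = 1e-4, 1.0, 4     # cross-section m^2, length m, elements
L = L_total / n_elem                  # element length

R = L / (k * A)                       # thermal resistance per element, K/W
C = rho * cp * (A * L)                # lumped capacitance per element, J/K
print(f"R = {R:.0f} K/W, C = {C:.1f} J/K, RC time constant = {R*C:.0f} s")
```

A finite-difference heat-balance solver then time-steps the resulting RC network exactly as the abstract describes.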

  7. Application of LSP texts in translator training

    Directory of Open Access Journals (Sweden)

    Larisa Ilynska

    2017-06-01

    Full Text Available The paper presents discussion of the results of extensive empirical research into efficient methods of educating and training translators of LSP (language for special purposes texts. The methodology is based on using popular LSP texts in the respective fields as one of the main media for translator training. The aim of the paper is to investigate the efficiency of this methodology in developing thematic, linguistic and cultural competences of the students, following Bloom’s revised taxonomy and European Master in Translation Network (EMT translator training competences. The methodology has been tested on the students of a professional Master study programme called Technical Translation implemented by the Institute of Applied Linguistics, Riga Technical University, Latvia. The group of students included representatives of different nationalities, translating from English into Latvian, Russian and French. Analysis of popular LSP texts provides an opportunity to structure student background knowledge and expand it to account for linguistic innovation. Application of popular LSP texts instead of purely technical or scientific texts characterised by neutral style and rigid genre conventions provides an opportunity for student translators to develop advanced text processing and decoding skills, to develop awareness of expressive resources of the source and target languages and to develop understanding of socio-pragmatic language use.

  8. Text mining a self-report back-translation.

    Science.gov (United States)

    Blanch, Angel; Aluja, Anton

    2016-06-01

    There are several recommendations about the routine to undertake when back translating self-report instruments in cross-cultural research. However, text mining methods have been generally ignored within this field. This work describes a text mining innovative application useful to adapt a personality questionnaire to 12 different languages. The method is divided in 3 different stages, a descriptive analysis of the available back-translated instrument versions, a dissimilarity assessment between the source language instrument and the 12 back-translations, and an item assessment of item meaning equivalence. The suggested method contributes to improve the back-translation process of self-report instruments for cross-cultural research in 2 significant intertwined ways. First, it defines a systematic approach to the back translation issue, allowing for a more orderly and informed evaluation concerning the equivalence of different versions of the same instrument in different languages. Second, it provides more accurate instrument back-translations, which has direct implications for the reliability and validity of the instrument's test scores when used in different cultures/languages. In addition, this procedure can be extended to the back-translation of self-reports measuring psychological constructs in clinical assessment. Future research works could refine the suggested methodology and use additional available text mining tools. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
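The dissimilarity-assessment stage of such a pipeline can be sketched with a simple token-level metric; the actual text mining tools used in the study are not reproduced here, and the example items are invented:

```python
# Sketch of the dissimilarity step in a back-translation check:
# compare a source item with its back-translated version using a
# token-level Jaccard dissimilarity (illustrative metric only).

def jaccard_dissimilarity(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

source_item = "I enjoy meeting new people"
back_translated = "I like meeting new people"
print(round(jaccard_dissimilarity(source_item, back_translated), 2))  # 0.33
```

Items whose dissimilarity exceeds some threshold would then be flagged for the meaning-equivalence review the abstract describes.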

  9. Domain Specific Language Support for Exascale. Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Baden, Scott [Univ. of California, San Diego, CA (United States)

    2017-07-11

    The project developed a domain-specific translator that enables legacy MPI source code to tolerate communication delays, which are increasing over time due to technological factors. The translator performs source-to-source translation that incorporates semantic information into the translation process. The output of the translator is a C program that runs as a data-driven program and uses an existing runtime to overlap communication automatically.
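A toy, purely textual sketch of the kind of rewrite involved in overlapping communication: splitting a blocking `MPI_Recv` into a nonblocking `MPI_Irecv` plus a deferred `MPI_Wait`, so independent work can run in between. A real translator (including the project's) relies on semantic analysis, not pattern matching; the snippet of C code here is invented:

```python
import re

# Hypothetical source-to-source rewrite: turn a blocking receive into
# Irecv + deferred Wait. Purely illustrative string surgery.
SRC = "MPI_Recv(buf, n, MPI_DOUBLE, src, tag, comm, &st);\ncompute(buf2);"

def split_recv(code):
    m = re.search(r"MPI_Recv\(([^;]*), &(\w+)\);", code)
    args, status = m.group(1), m.group(2)
    # Replace the blocking call; the request handle "req" is assumed
    # to be declared elsewhere in the rewritten program.
    code = code.replace(m.group(0), f"MPI_Irecv({args}, &req);")
    # Defer completion until after the independent work.
    return code + f"\nMPI_Wait(&req, &{status});"

print(split_recv(SRC))
```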

  10. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
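The threshold-driven selection idea can be sketched as follows: try a cheap DCT coder on each 8x8 block, and fall back to a higher-rate coder when the reconstruction distortion exceeds a threshold. The coefficient counts and threshold below are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch of mixture block coding's coder selection: keep a small
# low-frequency corner of the 2-D DCT, and switch to a larger corner
# when the mean-squared reconstruction error exceeds a threshold.

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix."""
    c = np.array([[np.cos(np.pi * (2 * j + 1) * k / (2 * n)) for j in range(n)]
                  for k in range(n)]) * np.sqrt(2 / n)
    c[0] /= np.sqrt(2)
    return c

def code_block(block, keep, C):
    coeffs = C @ block @ C.T
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1          # retain a keep x keep low-frequency corner
    return C.T @ (coeffs * mask) @ C

def mixture_code(block, threshold=25.0):
    C = dct_matrix(len(block))
    cheap = code_block(block, keep=2, C=C)
    if np.mean((block - cheap) ** 2) <= threshold:
        return cheap, "low-rate"
    return code_block(block, keep=4, C=C), "high-rate"

rng = np.random.default_rng(0)
smooth = np.full((8, 8), 100.0)       # flat block: cheap coder suffices
busy = rng.uniform(0, 255, (8, 8))    # textured block: needs more coefficients
print(mixture_code(smooth)[1], mixture_code(busy)[1])  # low-rate high-rate
```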

  11. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  12. Regulation of mRNA translation influences hypoxia tolerance

    International Nuclear Information System (INIS)

    Koritzinsky, M.; Wouters, B.G.; Koumenis, C.

    2003-01-01

    Hypoxia is a heterogeneous but common characteristic of human tumours, and poor oxygenation is associated with poor prognosis. We believe that the presence of viable hypoxic tumor cells reflects in part an adaptation and tolerance of these cells to oxygen deficiency. Since oxidative phosphorylation is compromised during hypoxia, adaptation may involve both the upregulation of glycolysis and the downregulation of energy consumption. mRNA translation is one of the most energy-costly cellular processes, and we and others have shown that global mRNA translation is rapidly inhibited during hypoxia. However, some mRNAs, including those coding for HIF-1α and VEGF, remain efficiently translated during hypoxia. Clearly, the mechanisms responsible for the overall inhibition of translation during hypoxia do not compromise the translation of certain hypoxia-induced mRNA species. We therefore hypothesize that the inhibition of mRNA translation serves to promote hypoxia tolerance in two ways: i) through conservation of energy and ii) through differential gene expression involved in hypoxia adaptation. We have recently identified two pathways that are responsible for the global inhibition of translation during hypoxia. The phosphorylation of the eukaryotic initiation factor eIF2α by the ER-resident kinase PERK results in down-regulation of protein synthesis shortly after the onset of hypoxia. In addition, the initiation complex eIF4F is disrupted during long-lasting hypoxic conditions. The identification of the molecular pathways responsible for the inhibition of overall translation during hypoxia has rendered it possible to investigate their importance for hypoxia tolerance. We have found that mouse embryo fibroblasts that are knockout for PERK, and therefore not able to inhibit protein synthesis efficiently during oxygen deficiency, are significantly less tolerant to hypoxia than their wildtype counterparts. We are currently also investigating the functional significance

  13. Bible translations into Italian (20th century

    Directory of Open Access Journals (Sweden)

    Ryszard Wróbel

    2011-09-01

    Full Text Available Discussing twentieth-century translations of the Bible into Italian, we have to make a crucial distinction: there are different translations and different editions; the latter are more numerous, as the same translation may appear in different forms. For many of them it is difficult to determine to whom they are addressed: some of their features show a broad willingness to promote the content of the Bible, while others make them a tool only for a group of specialists. The article discusses the Bibles printed in Italy in the twentieth century; there were 27 of them. It deliberately excludes less prevalent translations and elaborations, as well as partial studies for professionals. The information is presented in a tangible and transparent scheme, which facilitates their mutual comparability. Each description contains the name or title of the work, the author’s name, place and date of publication, publisher’s name, names of translators and editors, source of translation, editing characteristics, and other observations.

  14. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    Science.gov (United States)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the source code to RIES showed a significant lack of security-awareness among the programmers which - among other things - appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and the aspects need to be examined much sooner than right before an election.

  15. Current and future translation trends in aeronautics and astronautics

    Science.gov (United States)

    Rowe, Timothy

    1986-01-01

    The pattern of translation activity in aeronautics and astronautics is reviewed. It is argued that the international nature of the aerospace industry and the commercialization of space have increased the need for the translation of scientific literature in the aerospace field. Various factors which can affect the quality of translations are examined. The need to translate the activities of the Soviets, Germans, and French in materials science in microgravity, of the Japanese, Germans, and French in the development of industrial ceramics, and of the Chinese in launching and communications satellites is discussed. It is noted that due to increases in multilateral and bilateral relationships in the aerospace industry, the amount of translation from non-English source material into non-English text will increase and the most important languages will be French and German, with an increasing demand for Japanese, Chinese, Spanish, and Italian translations.

  16. English to Sanskrit Machine Translation Using Transfer Based approach

    Science.gov (United States)

    Pathak, Ganesh R.; Godse, Sachin P.

    2010-11-01

Translation is one of the needs of a global society for communicating the thoughts and ideas of one country to another. Translation is the process of interpreting the meaning of a text and subsequently producing an equivalent text that communicates the same message in another language. In this paper we give detailed information on how to convert source-language text into target-language text using the transfer-based approach to machine translation, and we describe an English-to-Sanskrit machine translator implemented with this approach. English is a global language used for business and communication, but a large part of the population of India does not use or understand English. Sanskrit is an ancient language of India, and most Indian languages are derived from it. Sanskrit can therefore act as an intermediate language for multilingual translation.
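A transfer-based pipeline of the kind the abstract describes has three classical stages: analysis of the source text, structural transfer, and target-language generation. The toy Python sketch below illustrates only the stages; the two-entry lexicon, the romanized Sanskrit glosses, and the single SVO-to-SOV reordering rule are invented for illustration and are not taken from the paper.

```python
# Toy sketch of the classic transfer-based MT pipeline:
# analysis -> transfer -> generation.
# The lexicon and reordering rule are hypothetical; a real English->Sanskrit
# system needs full morphological analysis and a far larger lexicon.

# Hypothetical bilingual lexicon (romanized Sanskrit glosses).
LEXICON = {"rama": "ramah", "reads": "pathati", "book": "pustakam"}

def analyze(sentence):
    """Analysis: tokenize the source sentence (stands in for real parsing)."""
    return sentence.lower().split()

def transfer(tokens):
    """Transfer: reorder subject-verb-object into subject-object-verb."""
    if len(tokens) == 3:
        subj, verb, obj = tokens
        return [subj, obj, verb]
    return tokens

def generate(tokens):
    """Generation: substitute target-language words from the lexicon."""
    return " ".join(LEXICON.get(t, t) for t in tokens)

def translate(sentence):
    return generate(transfer(analyze(sentence)))

print(translate("Rama reads book"))  # -> ramah pustakam pathati
```

Each stage is deliberately isolated so that, as in real transfer systems, the lexicon and the reordering rules can be extended independently.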

  17. DISSEMINATING MULTICULTURALISM THROUGH THE TEACHING OF TRANSLATION

    Directory of Open Access Journals (Sweden)

    Arido Laksono

    2013-12-01

Full Text Available ABSTRACT Students are expected to change the world. Their perspectives represent the way they view the world and its phenomena. The broader the knowledge they possess, the more tolerance they have in interpreting life. In the global era, students should understand the importance of having good knowledge of multiculturalism. They will be involved in inter-cultural encounters, since sources of information are widely available. A willingness to keep an open mind is required in order to develop a better place to live and work. One way to disseminate multicultural values is to use texts containing information about culture and social values. The class designed for disseminating multiculturalism, using texts written in English or Bahasa Indonesia, is Translation. Here, students are taught to interpret the messages conveyed and to translate the information from the source language into the target language correctly. The teacher must use good and creative techniques in delivering the material so that students really enjoy the class and deeply understand the topic. The teaching and learning process in the Translation class is therefore an effective medium for achieving the purpose stated above. Translation theory alone will not suffice for the translation job; comprehensive knowledge of other social sciences is also needed. Hence, the translation class will involve not only the discussion of lines of words in a paragraph but also reciprocal discussion among the members of the class. In the end, students will be able to translate the information in a text correctly and to help establish a civic society with a more open comprehension of society and its culture. Keywords: theory of translation, multiculturalism, teaching-learning process, globalization.

  18. Hybrid 3D Fractal Coding with Neighbourhood Vector Quantisation

    Directory of Open Access Journals (Sweden)

    Zhen Yao

    2004-12-01

    Full Text Available A hybrid 3D compression scheme which combines fractal coding with neighbourhood vector quantisation for video and volume data is reported. While fractal coding exploits the redundancy present in different scales, neighbourhood vector quantisation, as a generalisation of translational motion compensation, is a useful method for removing both intra- and inter-frame coherences. The hybrid coder outperforms most of the fractal coders published to date while the algorithm complexity is kept relatively low.

  19. FRC [field-reversed configuration] translation studies on FRX-C/LSM

    International Nuclear Information System (INIS)

    Rej, D.; Barnes, G.; Baron, M.

    1989-01-01

In preparation for upcoming compression-heating experiments, field-reversed configurations (FRCs) have been translated out of the FRX-C/LSM θ-pinch source and into the 0.4-m-id, 6.7-m-long translation region formerly used on FRX-C/T. Unlike earlier experiments, FRCs are generated without magnetic tearing in the larger FRX-C/LSM source (nominal coil id = 0.70 m, length = 2 m); larger, lower-energy-density FRCs are formed: r_s ≅ 0.17 m, B_ext ≅ 0.35 T, density ≅ 7 × 10²⁰ m⁻³, and T_e + T_i ≅ 400 eV. An initial 3-mtorr D₂ pressure is introduced by either static or puff fill. Asymmetric fields from auxiliary end coils (used for non-tearing formation) provide the accelerating force on the FRC, thereby eliminating the need for a conical θ-pinch coil. An important feature is the abrupt 44% decrease in the flux-conserving wall radius at the transition between the θ-pinch and translation region, similar to that in the compressor. In this paper we review a variety of issues addressed by the recent translation experiments: translation dynamics; translation through a modulated magnetic field; stabilization of the n = 2 rotational instability by weak helical quadrupole fields; and confinement properties. Results from internal magnetic field measurements in translating FRCs may be found in a companion paper. 10 refs., 5 figs

  20. Translational profiling of B cells infected with the Epstein-Barr virus reveals 5' leader ribosome recruitment through upstream open reading frames.

    Science.gov (United States)

    Bencun, Maja; Klinke, Olaf; Hotz-Wagenblatt, Agnes; Klaus, Severina; Tsai, Ming-Han; Poirey, Remy; Delecluse, Henri-Jacques

    2018-04-06

    The Epstein-Barr virus (EBV) genome encodes several hundred transcripts. We have used ribosome profiling to characterize viral translation in infected cells and map new translation initiation sites. We show here that EBV transcripts are translated with highly variable efficiency, owing to variable transcription and translation rates, variable ribosome recruitment to the leader region and coverage by monosomes versus polysomes. Some transcripts were hardly translated, others mainly carried monosomes, showed ribosome accumulation in leader regions and most likely represent non-coding RNAs. A similar process was visible for a subset of lytic genes including the key transactivators BZLF1 and BRLF1 in cells infected with weakly replicating EBV strains. This suggests that ribosome trapping, particularly in the leader region, represents a new checkpoint for the repression of lytic replication. We could identify 25 upstream open reading frames (uORFs) located upstream of coding transcripts that displayed 5' leader ribosome trapping, six of which were located in the leader region shared by many latent transcripts. These uORFs repressed viral translation and are likely to play an important role in the regulation of EBV translation.

  1. Writing under cover: Cristina Campo as translator of John Donne

    Directory of Open Access Journals (Sweden)

    Maria Panarello

    2009-12-01

    Full Text Available The study of Cristina Campo’s translations offers a precious contribution to those of us who are attempting to investigate the unknown area that lies in the shadowy zone between the source text and its translation. Vittoria Guerrini, a rather solitary and reticent figure in 20th-century Italian literature, wrote under several pen names, of which her favourite was Cristina Campo, the masque she chose for her beautiful and intense translations of a small but significant collection of poems by John Donne. This paper aims at exploring Cristina Campo’s attitude towards translation and the unique relationship she established with the poets she translated. John Donne’s translations reflect a singular solidarity displaying points of affinity between two extremely complex personalities. The dialogic rapport abolishes difference in space and time, as well as difference in language, as author and translator testify the same supreme tension towards beauty, truth and perfection. Translation in this perspective is a sacred gesture of mediation.

  2. Plant Translation Factors and Virus Resistance

    Directory of Open Access Journals (Sweden)

    Hélène Sanfaçon

    2015-06-01

Full Text Available Plant viruses recruit cellular translation factors not only to translate their viral RNAs but also to regulate their replication and potentiate their local and systemic movement. Because of the virus dependence on cellular translation factors, it is perhaps not surprising that many natural plant recessive resistance genes have been mapped to mutations of translation initiation factors eIF4E and eIF4G or their isoforms, eIFiso4E and eIFiso4G. The partial functional redundancy of these isoforms allows specific mutation or knock-down of one isoform to provide virus resistance without hindering the general health of the plant. New possible targets for antiviral strategies have also been identified following the characterization of other plant translation factors (eIF4A-like helicases, eIF3, eEF1A and eEF1B) that specifically interact with viral RNAs and proteins and regulate various aspects of the infection cycle. Emerging evidence that translation repression operates as an alternative antiviral RNA silencing mechanism is also discussed. Understanding the mechanisms that control the development of natural viral resistance and the emergence of virulent isolates in response to these plant defense responses will provide the basis for the selection of new sources of resistance and for the intelligent design of engineered resistance that is broad-spectrum and durable.

  3. Towards Qualifiable Code Generation from a Clocked Synchronous Subset of Modelica

    Directory of Open Access Journals (Sweden)

    Bernhard Thiele

    2015-01-01

Full Text Available So far no qualifiable automatic code generators (ACGs) are available for Modelica. Hence, digital control applications can be modeled and simulated in Modelica, but producing qualifiable target-system production code requires tedious additional effort (e.g., manual reprogramming). In order to more fully leverage the potential of a model-based development (MBD) process in Modelica, a qualifiable automatic code generator is needed. Typical Modelica code generation is a fairly complex process, which imposes a huge development burden on any effort at tool qualification. This work aims at mapping a Modelica subset for digital control function development to a well-understood synchronous data-flow kernel language. This kernel language makes it possible to resort to established compilation techniques for data-flow languages, which are understood well enough to be accepted by certification authorities. The mapping is established by providing a translational semantics from the Modelica subset to the synchronous data-flow kernel language. However, this translation turned out to be more intricate than initially expected and has given rise to several interesting issues that require suitable design decisions regarding the mapping and the language subset.

  4. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  5. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
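The "dynamic benchmarking" idea described above, archived benchmark results embedded alongside the source and re-exercised every time the code is upgraded, can be sketched as a small regression harness. Everything below (the model function, the monitored quantities, the tolerance) is a hypothetical placeholder, not the MAAP4 implementation.

```python
# Minimal sketch of dynamic benchmarking: archived benchmark results are
# stored with the code and re-checked whenever the model changes.

# Archived benchmark: (input, expected output) pairs captured from a past
# validated run, e.g. a plant transient or separate-effects test.
ARCHIVED_BENCHMARKS = {
    "peak_clad_temp": {"inputs": {"power_mw": 3000.0}, "expected": 1200.0},
    "vessel_pressure": {"inputs": {"power_mw": 3000.0}, "expected": 15.5},
}

def simulate(quantity, power_mw):
    """Placeholder simulation model; returns the quantity of interest."""
    if quantity == "peak_clad_temp":
        return 0.4 * power_mw            # fake correlation, for illustration
    if quantity == "vessel_pressure":
        return 15.5 * power_mw / 3000.0  # fake correlation, for illustration
    raise ValueError(quantity)

def run_benchmarks(rel_tol=0.02):
    """Re-run every archived benchmark; report any drift beyond rel_tol."""
    failures = []
    for name, bench in ARCHIVED_BENCHMARKS.items():
        got = simulate(name, **bench["inputs"])
        if abs(got - bench["expected"]) > rel_tol * abs(bench["expected"]):
            failures.append((name, bench["expected"], got))
    return failures

print(run_benchmarks())  # an empty list means the code still reproduces its benchmarks
```

The key design point mirrored here is that the benchmark data travel with the source, so a code upgrade cannot silently invalidate past agreement with experiments.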

  6. The key to technical translation, v.1 concept specification

    CERN Document Server

    Hann, Michael

    1992-01-01

    This handbook for German/English/German technical translators at all levels from student to professional covers the root terminologies of the spectrum of scientific and engineering fields. The work is designed to give technical translators direct insight into the main error sources occurring in their profession, especially those resulting from a poor understanding of the subject matter and the usage of particular terms to designate different concepts in different branches of technology. The style is easy to read and suitable for nonnative English speakers and translators with no engineering ex

  7. The key to technical translation, v.2 terminology/lexicography

    CERN Document Server

    Hann, Michael

    1992-01-01

    This handbook for German/English/German technical translators at all levels from student to professional covers the root terminologies of the spectrum of scientific and engineering fields. The work is designed to give technical translators direct insight into the main error sources occurring in their profession, especially those resulting from a poor understanding of the subject matter and the usage of particular terms to designate different concepts in different branches of technology. The style is easy to read and suitable for nonnative English speakers and translators with no engineering ex

  8. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation-system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene, and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample, and the results agreed reasonably well with observed values: the maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows us to optimise the position of the detection system for in-situ PGAA
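The flux calculation CITATION performs rests on multigroup neutron diffusion theory. A heavily reduced illustration, one energy group in one dimension with zero-flux boundaries, solves -D φ'' + Σₐ φ = S as a tridiagonal finite-difference system. The material constants below are illustrative round numbers, not the Am-Be/polyethylene/water parameters of the study.

```python
# One-group, 1-D finite-difference diffusion solve (Thomas algorithm):
#   -D * phi'' + Sigma_a * phi = S,  phi = 0 at both boundaries.
# A toy stand-in for the 3-D multigroup solve that CITATION performs.

def diffusion_flux_1d(D, sigma_a, source, length, n):
    """Solve for the flux at n interior points on [0, length]."""
    h = length / (n + 1)
    a = c = -D / h**2                 # off-diagonal coefficients
    b = 2.0 * D / h**2 + sigma_a      # diagonal coefficient
    # Forward sweep of the Thomas algorithm.
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c / b
    dp[0] = source / b
    for i in range(1, n):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (source - a * dp[i - 1]) / denom
    # Back substitution.
    phi = [0.0] * n
    phi[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = dp[i] - cp[i] * phi[i + 1]
    return phi

# Uniformly sourced slab: the flux peaks mid-slab and falls toward the edges.
flux = diffusion_flux_1d(D=1.0, sigma_a=0.02, source=1.0, length=50.0, n=99)
```

In a code like CITATION the same structure recurs per energy group and per spatial direction, with group-to-group scattering coupling the solves.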

  9. Translators vs pharmacists as successful interlingual knowledge mediators?

    DEFF Research Database (Denmark)

    Jensen, Matilde Nisbeth

Patient Information Leaflets (PILs) were introduced in the EU as mandatory texts accompanying all medication, informing users about dosage, side effects, etc., in order to foster informed decision-making and patient empowerment. By its nature, the PIL genre is complex, aiming at instructing lay people... about complex medical content, i.e. the mediation of specialized medical knowledge across a knowledge asymmetry. Within the EU, this intralingual translation is further complicated by an interlingual dimension, as PILs must be translated from English into all other 23 EU languages. Legally, PILs must... a contrastive source text/target text linguistic framework focussing on elements such as the use of nominalization, compound nouns, medical terminology and other formal register features. Findings showed significant differences between the two translator groups. My findings revealed that the nature of the translator...

  10. Mapping Translation Technology Research in Translation Studies

    DEFF Research Database (Denmark)

    Schjoldager, Anne; Christensen, Tina Paulsen; Flanagan, Marian

    2017-01-01

/Schjoldager 2010, 2011; Christensen 2011). Unfortunately, the increasing professional use of translation technology has not been mirrored within translation studies (TS) by a similar increase in research projects on translation technology (Munday 2009: 15; O’Hagan 2013; Doherty 2016: 952). The current thematic section aims to improve this situation by presenting new and innovative research papers that reflect on recent technological advances and their impact on the translation profession and translators from a diversity of perspectives and using a variety of methods. In Section 2, we present translation technology research as a subdiscipline of TS, and we define and discuss some basic concepts and models of the field that we use in the rest of the paper. Based on a small-scale study of papers published in TS journals between 2006 and 2016, Section 3 attempts to map relevant developments of translation...

  11. A nutrient-driven tRNA modification alters translational fidelity and genome-wide protein coding across an animal genus.

    Science.gov (United States)

    Zaborske, John M; DuMont, Vanessa L Bauer; Wallace, Edward W J; Pan, Tao; Aquadro, Charles F; Drummond, D Allan

    2014-12-01

    Natural selection favors efficient expression of encoded proteins, but the causes, mechanisms, and fitness consequences of evolved coding changes remain an area of aggressive inquiry. We report a large-scale reversal in the relative translational accuracy of codons across 12 fly species in the Drosophila/Sophophora genus. Because the reversal involves pairs of codons that are read by the same genomically encoded tRNAs, we hypothesize, and show by direct measurement, that a tRNA anticodon modification from guanosine to queuosine has coevolved with these genomic changes. Queuosine modification is present in most organisms but its function remains unclear. Modification levels vary across developmental stages in D. melanogaster, and, consistent with a causal effect, genes maximally expressed at each stage display selection for codons that are most accurate given stage-specific queuosine modification levels. In a kinetic model, the known increased affinity of queuosine-modified tRNA for ribosomes increases the accuracy of cognate codons while reducing the accuracy of near-cognate codons. Levels of queuosine modification in D. melanogaster reflect bioavailability of the precursor queuine, which eukaryotes scavenge from the tRNAs of bacteria and absorb in the gut. These results reveal a strikingly direct mechanism by which recoding of entire genomes results from changes in utilization of a nutrient.

  12. A nutrient-driven tRNA modification alters translational fidelity and genome-wide protein coding across an animal genus.

    Directory of Open Access Journals (Sweden)

    John M Zaborske

    2014-12-01

    Full Text Available Natural selection favors efficient expression of encoded proteins, but the causes, mechanisms, and fitness consequences of evolved coding changes remain an area of aggressive inquiry. We report a large-scale reversal in the relative translational accuracy of codons across 12 fly species in the Drosophila/Sophophora genus. Because the reversal involves pairs of codons that are read by the same genomically encoded tRNAs, we hypothesize, and show by direct measurement, that a tRNA anticodon modification from guanosine to queuosine has coevolved with these genomic changes. Queuosine modification is present in most organisms but its function remains unclear. Modification levels vary across developmental stages in D. melanogaster, and, consistent with a causal effect, genes maximally expressed at each stage display selection for codons that are most accurate given stage-specific queuosine modification levels. In a kinetic model, the known increased affinity of queuosine-modified tRNA for ribosomes increases the accuracy of cognate codons while reducing the accuracy of near-cognate codons. Levels of queuosine modification in D. melanogaster reflect bioavailability of the precursor queuine, which eukaryotes scavenge from the tRNAs of bacteria and absorb in the gut. These results reveal a strikingly direct mechanism by which recoding of entire genomes results from changes in utilization of a nutrient.

  13. Translation of vph mRNA in Streptomyces lividans and Escherichia coli after removal of the 5' untranslated leader.

    Science.gov (United States)

    Wu, C J; Janssen, G R

    1996-10-01

    The Streptomyces vinaceus viomycin phosphotransferase (vph) mRNA contains an untranslated leader with a conventional Shine-Dalgarno homology. The vph leader was removed by ligation of the vph coding sequence to the transcriptional start site of a Streptomyces or an Escherichia coli promoter, such that transcription would initiate at the first position of the vph start codon. Analysis of mRNA demonstrated that transcription initiated primarily at the A of the vph AUG translational start codon in both Streptomyces lividans and E. coli; cells expressing the unleadered vph mRNA were resistant to viomycin indicating that the Shine-Dalgarno sequence, or other features contained within the leader, was not necessary for vph translation. Addition of four nucleotides (5'-AUGC-3') onto the 5' end of the unleadered vph mRNA resulted in translation initiation from the vph start codon and the AUG triplet contained within the added sequence. Translational fusions of vph sequence to a Tn5 neo reporter gene indicated that the first 16 codons of vph coding sequence were sufficient to specify the translational start site and reading frame for expression of neomycin resistance in both E. coli and S. lividans.

  14. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
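The "stabilized linear inverse theory" mentioned above is commonly realized as damped (Tikhonov-regularized) least squares, m = (GᵀG + λI)⁻¹Gᵀd, where G maps the model (here, anomaly topography) to the observed Bouguer gravity d. A minimal sketch with a made-up 2x2 kernel rather than Nevada Test Site data:

```python
# Damped least squares (Tikhonov regularization), a common form of
# "stabilized linear inversion": minimize |G m - d|^2 + lam * |m|^2.
# The 2x2 kernel below is an invented example, not gravity data.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def tikhonov(G, d, lam):
    """Solve the damped normal equations (G^T G + lam I) m = G^T d."""
    n = len(G[0])
    GtG = [[sum(G[k][i] * G[k][j] for k in range(len(G)))
            + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Gtd = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(n)]
    return solve(GtG, Gtd)

# With lam -> 0 this reduces to ordinary least squares; lam > 0 damps
# (shrinks) the model, which is what stabilizes an ill-posed inversion.
m = tikhonov([[1.0, 0.0], [0.0, 2.0]], [1.0, 2.0], lam=0.0)
```

In the iterative TREND/INVERT scheme the abstract describes, a solve of this shape would be repeated as the linearization point is updated.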

  15. The translational repressor Crc controls the Pseudomonas putida benzoate and alkane catabolic pathways using a multi-tier regulation strategy.

    Science.gov (United States)

    Hernández-Arranz, Sofía; Moreno, Renata; Rojo, Fernando

    2013-01-01

    Metabolically versatile bacteria usually perceive aromatic compounds and hydrocarbons as non-preferred carbon sources, and their assimilation is inhibited if more preferable substrates are available. This is achieved via catabolite repression. In Pseudomonas putida, the expression of the genes allowing the assimilation of benzoate and n-alkanes is strongly inhibited by catabolite repression, a process controlled by the translational repressor Crc. Crc binds to and inhibits the translation of benR and alkS mRNAs, which encode the transcriptional activators that induce the expression of the benzoate and alkane degradation genes respectively. However, sequences similar to those recognized by Crc in benR and alkS mRNAs exist as well in the translation initiation regions of the mRNA of several structural genes of the benzoate and alkane pathways, which suggests that Crc may also regulate their translation. The present results show that some of these sites are functional, and that Crc inhibits the induction of both pathways by limiting not only the translation of their transcriptional activators, but also that of genes coding for the first enzyme in each pathway. Crc may also inhibit the translation of a gene involved in benzoate uptake. This multi-tier approach probably ensures the rapid regulation of pathway genes, minimizing the assimilation of non-preferred substrates when better options are available. A survey of possible Crc sites in the mRNAs of genes associated with other catabolic pathways suggested that targeting substrate uptake, pathway induction and/or pathway enzymes may be a common strategy to control the assimilation of non-preferred compounds. © 2012 Society for Applied Microbiology and Blackwell Publishing Ltd.

  16. Dual-Coding Theory and Connectionist Lexical Selection

    OpenAIRE

    Wang, Ye-Yi

    1994-01-01

    We introduce the bilingual dual-coding theory as a model for bilingual mental representation. Based on this model, lexical selection neural networks are implemented for a connectionist transfer project in machine translation. This lexical selection approach has two advantages. First, it is learnable. Little human effort on knowledge engineering is required. Secondly, it is psycholinguistically well-founded.

  17. International Meeting on Languages, Applied Linguistics and Translation

    OpenAIRE

    Guerra, Luis

    2012-01-01

    This meeting aims at providing an overview of the current theory and practice, exploring new directions and emerging trends, sharing good practice, and exchanging information regarding foreign languages, applied linguistics and translation. The meeting is an excellent opportunity for the presentation of current or previous language learning and translation projects funded by the European Commission or by other sources. Proposals are invited for one or more of the following topics, in any of t...

  18. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Yidong Xia; Mitch Plummer; Robert Podgorney; Ahmad Ghassemi

    2016-02-01

The performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km of depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that a desirable output electric power rate and lifespan can be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the horizontal fracture spacing has a profound effect on the long-term performance of heat production, 2) a downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are chosen under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite-element-based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  19. Use of monolingual and comparable corpora in the classroom to translate adverbial connectors

    Directory of Open Access Journals (Sweden)

    Beatriz Sánchez Cárdenas

    2016-04-01

    This research explored the reasons why certain adverbial discourse connectors, apparently easy to translate, are a source of translation problems that cannot be easily resolved with a bilingual dictionary. Moreover, this study analyzed the use of parallel corpora in the translation classroom and how it can increase the quality of text production. For this purpose, we compared student translations before and after receiving training on the use of corpus analysis tools.
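The corpus lookup such a class relies on reduces to a concordance query over aligned sentence pairs: given a source-language connector, list its attested target-language renderings so students can see the competing options a bilingual dictionary hides. The three English/Spanish sentence pairs below are invented for illustration.

```python
# Toy concordance query over a tiny aligned (parallel) corpus.
# The sentence pairs are invented placeholders, not a real corpus.

PARALLEL_CORPUS = [
    ("However, the results were clear.",
     "Sin embargo, los resultados fueron claros."),
    ("However, we must be careful.",
     "No obstante, debemos tener cuidado."),
    ("The data were, however, incomplete.",
     "Los datos eran, sin embargo, incompletos."),
]

def concordance(connector, corpus):
    """Return target sentences aligned with sources containing the connector."""
    needle = connector.lower()
    return [tgt for src, tgt in corpus if needle in src.lower()]

hits = concordance("however", PARALLEL_CORPUS)
# Three matches, exposing two competing Spanish renderings of "however".
```

Even this trivial query already demonstrates the classroom point: one source connector maps to several target connectors, with the choice governed by register and position.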

  20. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  1. NeuPAT: an intranet database supporting translational research in neuroblastic tumors.

    Science.gov (United States)

    Villamón, Eva; Piqueras, Marta; Meseguer, Javier; Blanquer, Ignacio; Berbegall, Ana P; Tadeo, Irene; Hernández, Vicente; Navarro, Samuel; Noguera, Rosa

    2013-03-01

    Translational research in oncology is directed mainly towards establishing a better risk stratification and searching for appropriate therapeutic targets. This research generates a tremendous amount of complex clinical and biological data needing speedy and effective management. The authors describe the design, implementation and early experiences of a computer-aided system for the integration and management of data for neuroblastoma patients. NeuPAT facilitates clinical and translational research, minimizes the workload in consolidating the information, reduces errors and increases correlation of data through extensive coding. This design can also be applied to other tumor types. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. From Polarity to Plurality in Translation Scholarship

    Directory of Open Access Journals (Sweden)

    Abdolla Karimzadeh

    2012-09-01

Full Text Available Review of the literature in translation studies shows that translation scholarship can be discussed at three macro-levels: (1) corpus-based studies, (2) protocol-based studies, and (3) systems-based studies. Researchers in corpus-based studies test hypotheses about the universals of translation. They also try to identify translation norms and regular linguistic patterns. This scholarship aims at showing that the language of translation is different from that of non-translation. The other purpose is to identify the techniques and strategies adopted by the translators. In protocol-based studies, researchers study the mental activities and the individual behaviors of translators while translating. They aim to describe the behavior of professional translators (versus translator trainees) during the process of translation, in a bid to identify how they chunk the source text (the unit of translation) and to describe how translation trainees develop their translation competence. These studies are longitudinal, in that they aim to investigate the change of intended behaviors in the subjects of the study. Like corpus-based studies, they are experimental, and data for analysis are collected by various methods including the translators’ verbal reports, keystroke logging, eye tracking, and so on. Recently, in a method called “triangulation”, researchers combine the above-mentioned methods of data collection to test their hypotheses on a stronger experimental basis. To collect the data, they also employ methods used in neurology (for example, electroencephalography) in order to obtain information on the physiological processes in the brains of translators while translating. And finally, in systems-based studies, researchers analyze more extended systems of production, distribution, and consumption of translations and their impacts on the target culture in a specific socio-cultural context. Differentiating

  3. Authenticity and Imitation in Translating Exposition: A Corpus-Based Study

    Science.gov (United States)

    Elmgrab, Ramadan Ahmed

    2015-01-01

    Many Western scholars such as Dryden show little interest in imitations, and express their preference for translations, i.e. paraphrases that are faithful to the sense of the source text. However, they consider imitations as a viable category of translation. It is the degree of freedom, or departure from the original, that differentiates a…

  4. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.
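
    The per-decay bookkeeping described above follows the standard MIRD-style pattern: for each radiation component, multiply the mean energy emitted per decay by the fraction absorbed in the target and a quality factor, sum over components, and divide by the target-organ mass. A minimal sketch, not SFACTOR's actual routines: the function name and constants are illustrative, and the unit conversions needed to land in rem/μCi-day are omitted.

    ```python
    def s_factor(components, target_mass_g):
        """Dose-equivalent factor per unit activity residence.

        components: iterable of (mean_energy_per_decay, absorbed_fraction,
        quality_factor) tuples, one per radiation type (alpha, electron,
        gamma, ...). Returns the sum of energy deposited per decay, weighted
        by quality factor, divided by the target mass.
        """
        return sum(delta * phi * q for delta, phi, q in components) / target_mass_g

    # One component: 1.0 energy unit/decay, half absorbed, quality factor 1,
    # deposited in a 2 g target.
    print(s_factor([(1.0, 0.5, 1.0)], 2.0))  # -> 0.25
    ```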

  5. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.

  6. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods for an approximation of review expert’s performance evaluation function. Due to limitations in ...

  7. FOREIGNIZATION AND DOMESTICATION STRATEGIES IN CULTURAL TERM TRANSLATION OF TOURISM BROCHURES

    Directory of Open Access Journals (Sweden)

    Choirul Fuadi

    2016-09-01

    Full Text Available In translating a brochure, a translator has to make decisions on the basis of the message and purpose. The translator is faced with two strategies of translation – foreignization and domestication. The purpose of the study is to examine the interrelationship between cultural term translation and the foreignization or domestication strategy in the cultural term translation of tourism brochures from Indonesian into English. This study used a qualitative descriptive design with a discourse analysis strategy. The note-taking technique was used to identify and classify the data. The objects of the study are tourism brochures from the Province of the Special Region of Yogyakarta and Central Java in 2015. The findings show that the translation strategies used depend on the translation process. When the cultural terms are familiar, the translator tends to use the domestication strategy and consider the target text. The translator chooses the domestication strategy to make tourists understand the text and to produce a communicative and natural translation. On the other hand, when cultural terms are foreign, the translator uses the foreignization strategy and considers the source text. Using the foreignization strategy, the translator tends to introduce the traditional cultural term. Keywords: discourse analysis, foreignization, domestication, cultural category, tourism brochure

  8. Recycling source terms for edge plasma fluid models and impact on convergence behaviour of the BRAAMS 'B2' code

    International Nuclear Information System (INIS)

    Maddison, G.P.; Reiter, D.

    1994-02-01

    Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, plus varying approximations of recycling, reveal marked sensitivity of both its results and its convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As generally for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures are also inferred potentially to improve their convergence further. (orig.)

  9. L’Hôpital's Analyse des infiniments petits an annotated translation with source material by Johann Bernoulli

    CERN Document Server

    Bradley, Robert E; Sandifer, C Edward

    2015-01-01

    This monograph is an annotated translation of what is considered to be the world’s first calculus textbook, originally published in French in 1696. That anonymously published textbook on differential calculus was based on lectures given to the Marquis de l’Hôpital in 1691-2 by the great Swiss mathematician, Johann Bernoulli. In the 1920s, a copy of Bernoulli’s lecture notes was discovered in a library in Basel, which presented the opportunity to compare Bernoulli’s notes, in Latin, to l’Hôpital’s text in French. The similarities are remarkable, but there is also much in l’Hôpital’s book that is original and innovative. This book offers the first English translation of Bernoulli's notes, along with the first faithful English translation of l’Hôpital’s text, complete with annotations and commentary. Additionally, a significant portion of the correspondence between l’Hôpital and Bernoulli has been included, also for the first time in English translation. This translation will provide ...

  10. Translation and development of the BNWL-geosphere model

    International Nuclear Information System (INIS)

    Grundfelt, B.

    1977-02-01

    The report deals with the rate of radioactivity discharge from a repository for radioactive waste in a geologic formation to the biosphere. A BASIC-language computer program called GETOUT was developed in the USA. It was obtained by the Swedish project Nuclear Fuel Safety and has thereafter been translated into FORTRAN. The main extension of the code, made during the translation, is a model for averaging the hydrodynamic and geochemical parameters for the case of non-uniform packing of the column (e.g. considering a repository in cracked rock with crack width, crack spacing etc. in different zones). The program has been tested on an IBM model 360/75 computer. The migration is governed by three parameters, i.e. the ground water velocity, the dispersion coefficient and the nuclide retentivities. (L.B.)

  11. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculations of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 main frames as well as on a MicroVAX 3800 system. In order to get a plant-specific source term, data on the CNLV including initial core inventory, burn-up, primary containment structures, and materials used for the calculations have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of high pressure core spray and reactor core isolation cooling systems. The probability for that event is approximately 4.5 × 10⁻⁶. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  12. Recommendations for Adopting the International Code of Marketing of Breast-milk Substitutes Into U.S. Policy.

    Science.gov (United States)

    Soldavini, Jessica; Taillie, Lindsey Smith

    2017-08-01

    In 1981, the World Health Organization adopted the International Code of Marketing of Breast-milk Substitutes (International Code), with subsequent resolutions adopted since then. The International Code contributes to the safe and adequate provision of nutrition for infants by protecting and promoting breastfeeding and ensuring that human milk substitutes, when necessary, are used properly through adequate information and appropriate marketing and distribution. Despite the World Health Organization recommendations for all member nations to implement the International Code in its entirety, the United States has yet to take action to translate it into any national measures. In 2012, only 22.3% of infants in the United States met the American Academy of Pediatrics recommendation of at least 6 months of exclusive breastfeeding. Countries adopting legislation reflecting the provisions of the International Code have seen increases in breastfeeding rates. This article discusses recommendations for translating the International Code into U.S. policy. Adopting legislation that implements, monitors, and enforces the International Code in its entirety has the potential to contribute to increased rates of breastfeeding in the United States, which can lead to improved health outcomes in both infants and breastfeeding mothers.

  13. Efficient accurate syntactic direct translation models: one tree at a time

    NARCIS (Netherlands)

    Hassan, H.; Sima'an, K.; Way, A.

    2011-01-01

    A challenging aspect of Statistical Machine Translation from Arabic to English lies in bringing the Arabic source morpho-syntax to bear on the lexical as well as word-order choices of the English target string. In this article, we extend the feature-rich discriminative Direct Translation Model 2

  14. Topical Review: Translating Translational Research in Behavioral Science.

    Science.gov (United States)

    Hommel, Kevin A; Modi, Avani C; Piazza-Waggoner, Carrie; Myers, James D

    2015-01-01

    To present a model of translational research for behavioral science that communicates the role of behavioral research at each phase of translation. A task force identified gaps in knowledge regarding behavioral translational research processes and made recommendations regarding advancement of knowledge. A comprehensive model of translational behavioral research was developed. This model represents T1, T2, and T3 research activities, as well as Phase 1, 2, 3, and 4 clinical trials. Clinical illustrations of translational processes are also offered as support for the model. Behavioral science has struggled with defining a translational research model that effectively articulates each stage of translation and complements biomedical research. Our model defines key activities at each phase of translation from basic discovery to dissemination/implementation. This should be a starting point for communicating the role of behavioral science in translational research and a catalyst for better integration of biomedical and behavioral research. © The Author 2015. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Finite translation surfaces with maximal number of translations

    OpenAIRE

    Schlage-Puchta, Jan-Christoph; Weitze-Schmithuesen, Gabriela

    2013-01-01

    The natural automorphism group of a translation surface is its group of translations. For finite translation surfaces of genus g > 1 the order of this group is naturally bounded in terms of g due to a Riemann-Hurwitz formula argument. In analogy with classical Hurwitz surfaces, we call surfaces which achieve the maximal bound Hurwitz translation surfaces. We study for which g there exist Hurwitz translation surfaces of genus g.

  16. Machine Translation Tools - Tools of The Translator's Trade

    DEFF Research Database (Denmark)

    Kastberg, Peter

    2012-01-01

    In this article three of the more common types of translation tools are presented, discussed and critically evaluated. The types of translation tools dealt with in this article are: Fully Automated Machine Translation (or FAMT), Human Aided Machine Translation (or HAMT) and Machine Aided Human Translation (or MAHT). The strengths and weaknesses of the different types of tools are discussed and evaluated by means of a number of examples. The article aims at two things: at presenting a sort of state of the art of what is commonly referred to as “machine translation” as well as at providing the reader with a sound basis for considering what translation tool (if any) is the most appropriate in order to meet his or her specific translation needs.

  17. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
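
    The entropy of an information source, central to the Shannon's Noiseless Coding Theorem chapter mentioned above, is directly computable; a short illustrative sketch (not taken from the book):

    ```python
    import math

    def entropy(probs):
        """Shannon entropy H(X) = -sum p * log2(p) of a finite source.

        By the Noiseless Coding Theorem, H(X) lower-bounds the expected
        codeword length (in bits per symbol) of any uniquely decodable code.
        """
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A fair coin carries exactly one bit per symbol.
    print(entropy([0.5, 0.5]))  # -> 1.0
    ```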

  18. Analysis of genetic code ambiguity arising from nematode-specific misacylated tRNAs.

    Directory of Open Access Journals (Sweden)

    Kiyofumi Hamashima

    Full Text Available The faithful translation of the genetic code requires the highly accurate aminoacylation of transfer RNAs (tRNAs). However, it has been shown that nematode-specific V-arm-containing tRNAs (nev-tRNAs) are misacylated with leucine in vitro in a manner that transgresses the genetic code. nev-tRNA(Gly) (CCC) and nev-tRNA(Ile) (UAU), which are the major nev-tRNA isotypes, could theoretically decode the glycine (GGG) codon and isoleucine (AUA) codon as leucine, causing GGG and AUA codon ambiguity in nematode cells. To test this hypothesis, we investigated the functionality of nev-tRNAs and their impact on the proteome of Caenorhabditis elegans. Analysis of the nucleotide sequences in the 3' end regions of the nev-tRNAs showed that they had matured correctly, with the addition of CCA, which is a crucial posttranscriptional modification required for tRNA aminoacylation. The nuclear export of nev-tRNAs was confirmed with an analysis of their subcellular localization. These results show that nev-tRNAs are processed to their mature forms like common tRNAs and are available for translation. However, a whole-cell proteome analysis found no detectable level of nev-tRNA-induced mistranslation in C. elegans cells, suggesting that the genetic code is not ambiguous, at least under normal growth conditions. Our findings indicate that the translational fidelity of the nematode genetic code is strictly maintained, contrary to our expectations, although deviant tRNAs with misacylation properties are highly conserved in the nematode genome.

  19. Translation of Syntactic Repetitions as Formal-Aesthetic Marker in Das Brot

    Directory of Open Access Journals (Sweden)

    Rosyidah

    2017-03-01

    Full Text Available Translating repetition as a formal-aesthetic marker in a literary text is a hard task and a challenge for translators. The topic of this study is the translation of syntactic repetition as a formal-aesthetic marker in a literary text. The problems examined include: (1) the syntactic repetitions in the source text and (2) the strategies carried out by the students to translate these repetitions. This is a case study with a qualitative approach which aims to describe the syntactic repetitions as formal-aesthetic markers in the German short story Das Brot written by Wolfgang Borchert and to explain the strategies used by Indonesian students to translate the syntactic repetitions. The research data are repetitive sentences taken from the German short story and from the translated versions done by 60 students. The analysis was carried out interactively and sociosemiotically. The results show that there were repetitions at the sentence level, including sentence parts, sentences and content repetition, in the source text. The strategies used by the students to translate the repetitions of sentence parts and sentences were exact preservation and modified preservation with reduction, implicitation and addition of extra words, avoidance with deletion, explicitation, implicitation, nominalization, and synonymy. In the meantime, content repetitions were translated using the strategy of exact preservation and preservation with modification by adding extra words and using role-based terms of address. Thus, the results lead to two new variations of modified preservation, namely preservation by adding extra words and by changing addressing terms, and one new variation of avoidance, that is, explicitation.

  20. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
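
    Two of the UFM ingredients named above — stop codon frequency and positional purine bias — are simple to compute from a DNA string. A minimal sketch of those two statistics only, not the published UFM scoring function:

    ```python
    def stop_codon_freq(seq):
        """Fraction of in-frame triplets (frame 0) that are stop codons."""
        stops = {"TAA", "TAG", "TGA"}
        codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
        return sum(c in stops for c in codons) / len(codons)

    def purine_fraction(seq, pos):
        """Purine (A or G) fraction at codon position pos (0, 1 or 2)."""
        bases = seq[pos::3]
        return sum(b in "AG" for b in bases) / len(bases)

    # ATG TAA: one of the two codons is a stop codon.
    print(stop_codon_freq("ATGTAA"))  # -> 0.5
    ```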

  1. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...
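
    The source-coding mapping the snippet alludes to is made concrete by the classic Huffman construction, which repeatedly merges the two least-likely symbols; an illustrative sketch (not from the record) that returns only the resulting code lengths:

    ```python
    import heapq

    def huffman_lengths(freqs):
        """Huffman code lengths from a {symbol: frequency} dict.

        Each heap entry is (total frequency, list of symbols in subtree);
        merging two subtrees deepens every symbol they contain by one bit.
        """
        heap = [(f, [s]) for s, f in freqs.items()]
        heapq.heapify(heap)
        lengths = {s: 0 for s in freqs}
        while len(heap) > 1:
            f1, syms1 = heapq.heappop(heap)
            f2, syms2 = heapq.heappop(heap)
            for s in syms1 + syms2:
                lengths[s] += 1
            heapq.heappush(heap, (f1 + f2, syms1 + syms2))
        return lengths

    # The most frequent symbol gets the shortest code.
    print(huffman_lengths({"a": 5, "b": 2, "c": 1, "d": 1}))
    ```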

  2. Design development of bellows for the DNB beam source

    International Nuclear Information System (INIS)

    Singh, Dhananjay Kumar; Venkata Nagaraju, M.; Joshi, Jaydeep; Patel, Hitesh; Yadav, Ashish; Pillai, Suraj; Singh, Mahendrajit; Bandyopadhyay, Mainak; Chakraborty, A.K.; Sharma, Dheeraj

    2017-01-01

    Establishing a procedure and mechanism for the alignment of ion beams in Neutral Beam (NB) sources for ITER-like systems is complex due to large traversal distances (∼21 m) and the restricted use of flexible elements in the system. For the beam source of the DNB, the movement requirements for beam alignment are a combination of tilting (±9 mrad), rotation (±9 mrad) and translation (±25 mm). The present work describes the design development of a system composed of three single-ply 'Gimbal' type bellows systems, placed in series, in L-shaped hydraulic lines (sizes DN50, DN20 and DN15). The paper details the generation of initial requirements, the transformation of movements at bellows locations, the selection of bellows/combinations of bellows, the minimization of induced movements by optimization of bellows locations, the estimation of movements through CAESAR II and the design compliance with respect to the EJMA code

  3. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  4. ADLIB: A simple database framework for beamline codes

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1993-01-01

    There are many well developed codes available for beamline design and analysis. A significant fraction of each of these codes is devoted to processing its own unique input language for describing the problem. None of these large, complex, and powerful codes does everything. Adding a new bit of specialized physics can be a difficult task whose successful completion makes the code even larger and more complex. This paper describes an attempt to move in the opposite direction, toward a family of small, simple, single purpose physics and utility modules, linked by an open, portable, public domain database framework. These small specialized physics codes begin with the beamline parameters already loaded in the database, and accessible via the handful of subroutines that constitute ADLIB. Such codes are easier to write, and inherently organized in a manner suitable for incorporation in model based control system algorithms. Examples include programs for analyzing beamline misalignment sensitivities, for simulating and fitting beam steering data, and for translating among MARYLIE, TRANSPORT, and TRACE3D formats
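
    A database-centric architecture like the one described can be pictured as a small shared parameter store that each single-purpose physics module reads and writes instead of parsing its own input language. Everything below (class and method names) is a hypothetical illustration, not ADLIB's actual API:

    ```python
    class BeamlineDB:
        """Tiny in-memory stand-in for a shared beamline parameter database.

        Physics modules begin with parameters already loaded here and access
        them through a handful of routines rather than a bespoke input parser.
        """

        def __init__(self):
            self._params = {}

        def put(self, element, key, value):
            self._params[(element, key)] = value

        def get(self, element, key):
            return self._params[(element, key)]

    # One module loads a quadrupole gradient; another reads it back.
    db = BeamlineDB()
    db.put("quad1", "gradient", 3.5)
    print(db.get("quad1", "gradient"))  # -> 3.5
    ```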

  5. Translation Ambiguity but Not Word Class Predicts Translation Performance

    Science.gov (United States)

    Prior, Anat; Kroll, Judith F.; Macwhinney, Brian

    2013-01-01

    We investigated the influence of word class and translation ambiguity on cross-linguistic representation and processing. Bilingual speakers of English and Spanish performed translation production and translation recognition tasks on nouns and verbs in both languages. Words either had a single translation or more than one translation. Translation…

  6. Examining English-German Translation Ambiguity Using Primed Translation Recognition

    Science.gov (United States)

    Eddington, Chelsea M.; Tokowicz, Natasha

    2013-01-01

    Many words have more than one translation across languages. Such "translation-ambiguous" words are translated more slowly and less accurately than their unambiguous counterparts. We examine the extent to which word context and translation dominance influence the processing of translation-ambiguous words. We further examine how these factors…

  7. A GRAMMATICAL ADJUSTMENT ANALYSIS OF STATISTICAL MACHINE TRANSLATION METHOD USED BY GOOGLE TRANSLATE COMPARED TO HUMAN TRANSLATION IN TRANSLATING ENGLISH TEXT TO INDONESIAN

    Directory of Open Access Journals (Sweden)

    Eko Pujianto

    2017-04-01

    Full Text Available Google Translate is a program which provides a fast, free and effortless translating service. This service uses a unique method to translate. The system is called "Statistical Machine Translation" (SMT), the newest method in automatic translation. Machine translation (MT) is an area drawing on many different subjects of study and techniques from linguistics, computer science, artificial intelligence (AI), translation theory, and statistics. SMT works by using statistical methods and mathematics to process the training data. The training data is corpus-based: a compilation of sentences and words of the languages (SL and TL) from translations done by humans. By using this method, Google lets their machine discover the rules for itself. They do this by analyzing millions of documents that have already been translated by human translators and then generating the result based on the corpus/training data. However, questions arise when the results of the automatic translation prove to be unreliable to some extent. This paper questions the dependability of Google Translate in comparison with the grammatical adjustment that naturally characterizes human translators' specific advantage. The attempt is manifested through the analysis of the TL of some texts translated by the SMT. It is expected that by using the sample of TL produced by SMT we can learn the potential flaws of the translation. If such exist, the partial or more substantial undependability of SMT may open more windows to the debates of whether this service may suffice the users' need.
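
    The corpus-based estimation the abstract describes can be illustrated with a toy translation table built by relative frequency over word-aligned pairs. The tiny "corpus" and the assumption that alignments are already given are both invented for illustration; real SMT systems learn alignments and probabilities from millions of sentence pairs.

    ```python
    from collections import Counter

    # Hypothetical word-aligned training data: (source word, target word)
    # pairs, as if extracted from a human-translated English-Indonesian corpus.
    aligned_pairs = [("house", "rumah"), ("house", "rumah"), ("house", "griya")]

    # Translation probability p(target | source) by relative frequency.
    pair_counts = Counter(aligned_pairs)
    source_counts = Counter(s for s, _ in aligned_pairs)
    p = {(s, t): c / source_counts[s] for (s, t), c in pair_counts.items()}
    ```

    Decoding then amounts to picking, for each source word or phrase, a high-probability target under such a table combined with a language model of the target language.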

  8. “De interpretatione recta...”: Early Modern Theories of Translation

    Directory of Open Access Journals (Sweden)

    Zaharia Oana-Alis

    2014-12-01

    Full Text Available Translation has been essential to the development of languages and cultures throughout the centuries, particularly in the early modern period when it became a cornerstone of the process of transition from Latin to vernacular productions, in such countries as France, Italy, England and Spain. This process was accompanied by a growing interest in defining the rules and features of the practice of translation. The present article aims to examine the principles that underlay the highly intertextual early modern translation theory by considering its classical sources and development. It focuses on subjects that were constantly reiterated in any discussion about translation: the debate concerning the best methods of translation, the sense-for-sense/word-for-word dichotomy - a topos that can be traced to the discourse on translation initiated by Cicero and Horace and was further developed by the Church fathers, notably St. Jerome, and eventually inherited by both medieval and Renaissance translators. Furthermore, it looks at the differences and continuities that characterise the medieval and Renaissance discourses on translation, with a focus on the transition from the medieval, free manner of translation to the humanist, philological one.

  9. Recent advances in coding theory for near error-free communications

    Science.gov (United States)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
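
    None of the specific codes surveyed above is reproduced here, but the core idea of channel coding — adding redundancy so transmission errors can be corrected — is visible even in the simplest rate-1/3 repetition code. An illustrative sketch, far weaker than the convolutional codes the record discusses:

    ```python
    def encode(bits):
        """Rate-1/3 repetition code: transmit each bit three times."""
        return [b for b in bits for _ in range(3)]

    def decode(received):
        """Majority vote over each block of three received bits.

        Any single bit flip within a block is corrected.
        """
        return [int(sum(received[i:i + 3]) >= 2)
                for i in range(0, len(received), 3)]

    # A flipped bit in the first block is repaired by the majority vote.
    print(decode([1, 0, 1, 0, 0, 0]))  # -> [1, 0]
    ```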

  10. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)

  11. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva

    2011-12-01

    Full Text Available Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies, etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study will focus on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach will evidence the polysemiotic nature of the translation process. In Audiovisual Translation both text and image are at play; therefore, the translation of speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, our focus being on subtitles which did not successfully convey the compliment expressed in the source text, as well as on the reasons for this, namely differences in register, Culture Specific Items and repetitions. These differences lead to a different portrayal/identity/perception of the main character in the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  12. Transliteration normalization for Information Extraction and Machine Translation

    Directory of Open Access Journals (Sweden)

    Yuval Marton

    2014-12-01

    Full Text Available Foreign name transliterations typically include multiple spelling variants. These variants cause data sparseness and inconsistency problems, increase the Out-of-Vocabulary (OOV) rate, and present challenges for Machine Translation, Information Extraction and other natural language processing (NLP) tasks. This work aims to identify and cluster name spelling variants using a Statistical Machine Translation method: word alignment. The variants are identified by being aligned to the same “pivot” name in another language (the source language in Machine Translation settings). Based on word-to-word translation and transliteration probabilities, as well as the string edit distance metric, names with similar spellings in the target language are clustered and then normalized to a canonical form. With this approach, tens of thousands of high-precision name transliteration spelling variants are extracted from sentence-aligned bilingual corpora in Arabic and English (in both languages). When these normalized name spelling variants are applied to Information Extraction tasks, improvements over strong baseline systems are observed. When applied to Machine Translation tasks, a large improvement potential is shown.
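The final clustering step can be sketched as follows. This is a simplified illustration: the paper's pivot-name alignment and translation/transliteration probabilities are omitted, and the function names, sample names, and distance threshold are invented. Spelling variants within a small edit distance of a canonical form (here, the first variant seen) are grouped together.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def normalize(variants, max_dist=2):
    """Cluster variants within max_dist of a canonical form (first seen)."""
    clusters = {}
    for name in variants:
        for canon in clusters:
            if edit_distance(name, canon) <= max_dist:
                clusters[canon].append(name)
                break
        else:
            clusters[name] = [name]   # new canonical form
    return clusters


variants = ["Muhammad", "Mohammed", "Mohamed", "Qaddafi"]
clusters = normalize(variants, max_dist=3)   # groups the Muhammad variants
```

In the paper, the grouping evidence is word alignment to a shared pivot; edit distance is only one of the signals combined.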

  13. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression; it applies a scalar quantization strategy to the original hyperspectral images, followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined subject to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
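The near-lossless guarantee of the quantization stage can be checked in a few lines. This is a minimal sketch of scalar quantization only; the paper's multilinear-regression side information and the DSC decoder itself are omitted, and the pixel values are invented. With step q and mid-point reconstruction, the per-pixel error is bounded by q/2, which is the property the optimal quantization step is chosen to preserve under correct DSC decoding.

```python
def quantize(pixels, q):
    """Scalar quantization: map each pixel to its bin index."""
    return [p // q for p in pixels]


def reconstruct(indices, q):
    """Mid-point reconstruction of each bin."""
    return [i * q + q // 2 for i in indices]


pixels = [0, 3, 7, 12, 255]
q = 4
rec = reconstruct(quantize(pixels, q), q)
errors = [abs(p - r) for p, r in zip(pixels, rec)]
assert max(errors) <= q // 2   # the near-lossless bound
```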

  14. Translation-coupling systems

    Science.gov (United States)

    Pfleger, Brian; Mendez-Perez, Daniel

    2013-11-05

    Disclosed are systems and methods for coupling translation of a target gene to a detectable response gene. A version of the invention includes a translation-coupling cassette. The translation-coupling cassette includes a target gene, a response gene, a response-gene translation control element, and a secondary structure-forming sequence that reversibly forms a secondary structure masking the response-gene translation control element. Masking of the response-gene translation control element inhibits translation of the response gene. Full translation of the target gene results in unfolding of the secondary structure and consequent translation of the response gene. Translation of the target gene is determined by detecting presence of the response-gene protein product. The invention further includes RNA transcripts of the translation-coupling cassettes, vectors comprising the translation-coupling cassettes, hosts comprising the translation-coupling cassettes, methods of using the translation-coupling cassettes, and gene products produced with the translation-coupling cassettes.

  15. The Role of Semantics in Translation Recognition: Effects of Number of Translations, Dominance of Translations and Semantic Relatedness of Multiple Translations

    Science.gov (United States)

    Laxen, Jannika; Lavaur, Jean-Marc

    2010-01-01

    This study aims to examine the influence of multiple translations of a word on bilingual processing in three translation recognition experiments during which French-English bilinguals had to decide whether two words were translations of each other or not. In the first experiment, words with only one translation were recognized as translations…

  16. Practical Evaluation of Stateful NAT64/DNS64 Translation

    Directory of Open Access Journals (Sweden)

    SKOBERNE, N.

    2011-08-01

    Full Text Available It is often suggested that the approach to IPv6 transition is dual-stack deployment; however, this is not feasible in certain environments. As Network Address Translation -- Protocol Translation (NAT-PT) has been deprecated, stateful NAT64 and DNS64 RFCs have been published, supporting only the IPv6-to-IPv4 translation scenario. Now the question of usability in the real world arises. In this paper, we systematically test a number of widely used application-layer network protocols to find out how well they traverse Ecdysis, the first open-source stateful NAT64 and DNS64 implementation. We practically evaluate 18 popular protocols, among them HTTP, RDP, MSNP, and IMAP, and discuss the shortcomings of such translations that might not be apparent at first sight.
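The address mapping a stateful NAT64 performs can be illustrated with the well-known prefix 64:ff9b::/96 from RFC 6052. This is only a sketch of the address arithmetic; Ecdysis implements it in the kernel together with stateful session tracking, and the helper names here are invented.

```python
import ipaddress


def embed_ipv4(ipv4_str, prefix="64:ff9b::"):
    """Embed an IPv4 address in the low 32 bits of a /96 IPv6 prefix."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address(prefix)) + int(v4))


def extract_ipv4(ipv6_addr):
    """Recover the embedded IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(ipv6_addr) & 0xFFFFFFFF)


mapped = embed_ipv4("192.0.2.1")
assert str(mapped) == "64:ff9b::c000:201"
assert str(extract_ipv4(mapped)) == "192.0.2.1"
```

Protocols that carry literal IP addresses in their payload (one of the shortcomings the paper examines) break precisely because only packet headers are rewritten this way.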

  17. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. The comparisons between expectations for others and respondents' own behaviors yield complex and interesting outcomes. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  18. Language translation challenges with Arabic speakers participating in qualitative research studies.

    Science.gov (United States)

    Al-Amer, Rasmieh; Ramjan, Lucie; Glew, Paul; Darwish, Maram; Salamonson, Yenna

    2016-02-01

    This paper discusses how a research team negotiated the challenges of language differences in a qualitative study that involved two languages. The lead researcher shared the participants' language and culture, and the interviews were conducted using the Arabic language as a source language, which was then translated and disseminated in the English language (target language). The challenges in relation to translation in cross-cultural research were highlighted from a perspective of establishing meaning as a vital issue in qualitative research. The paper draws on insights gained from a study undertaken among Arabic-speaking participants involving the use of in-depth semi-structured interviews. The study was undertaken using a purposive sample of 15 participants with Type 2 Diabetes Mellitus and co-existing depression and explored their perception of self-care management behaviours. Data analysis was performed in two phases. The first phase entailed translation and transcription of the data, and the second phase entailed thematic analysis of the data to develop categories and themes. In this paper there is discussion on the translation process and its inherent challenges. As translation is an interpretive process and not merely a direct message transfer from a source language to a target language, translators need to systematically and accurately capture the full meaning of the spoken language. This discussion paper highlights difficulties in the translation process, specifically in managing data in relation to metaphors, medical terminology and connotation of the text, and importantly, preserving the meaning between the original and translated data. Recommendations for future qualitative studies involving interviews with non-English speaking participants are outlined, which may assist researchers maintain the integrity of the data throughout the translation process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Models of culture in Nabokov's memoirs and translation memoirs in Serbian and Croatian language

    Directory of Open Access Journals (Sweden)

    Razdobudko-Čović Larisa I.

    2012-01-01

    Full Text Available The paper presents an analysis of two Serbian translations of V. Nabokov's memoirs: the translation of the novel 'Drugie berega' ('The Other Shores'), published in Russian as an authorized translation from the original English version 'Conclusive Evidence', and the translation of Nabokov's authorized translation from Russian to English entitled 'Speak, Memory'. The creolization of three models of culture in translation from the two originals - Russian and English - is presented. Specific features of the two Serbian translations are analyzed, and a survey of characteristic mistakes caused by some specific characteristics of the source language is given. Also, Nabokov's very original, highly interpretative approach to translation is highlighted.

  20. Whether and Where to Code in the Wireless Relay Channel

    DEFF Research Database (Denmark)

    Shi, Xiaomeng; Médard, Muriel; Roetter, Daniel Enrique Lucani

    2013-01-01

    The throughput benefits of random linear network codes have been studied extensively for wirelined and wireless erasure networks. It is often assumed that all nodes within a network perform coding operations. In energy-constrained systems, however, coding subgraphs should be chosen to control the number of coding nodes while maintaining throughput. In this paper, we explore the strategic use of network coding in the wireless packet erasure relay channel according to both throughput and energy metrics. In the relay channel, a single source communicates to a single sink through the aid of a half-duplex relay. The fluid flow model is used to describe the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution if only the source or only the relay is coding. In addition to transmission energy, we take into account coding and reception ...

  1. Johannitius (809-873 AD), a medieval physician, translator and author.

    Science.gov (United States)

    Dalfardi, Behnam; Daneshfard, Babak; Nezhad, Golnoush Sadat Mahmoudi

    2016-08-01

    The medieval physician, translator and author Abū Zayd Ḥunayn ibn Isḥāq al-'Ibādī, best known in the West as Johannitius, is considered the best translator of Greek texts, particularly medical writings, into Arabic. He made great inroads in the art of translation in the Islamic world. In addition to his own translations, Johannitius put significant effort into training pupils and passing knowledge about translation to succeeding generations. He was also a great writer, compiling over 100 books on different subjects, especially medical ones. Among his own works, the illustrious Kitab al-Ashr Maqalat fil-Ayn (Ten Treatises on the Eye) contains the oldest known illustration of the structure of the eye. It served as the primary source on Galen's theory of vision and was subsequently used by Western scholars. © The Author(s) 2016.

  2. PERSONALITY TYPE AND TRANSLATION PERFORMANCE OF PERSIAN TRANSLATOR TRAINEES

    Directory of Open Access Journals (Sweden)

    Reza Shaki

    2017-09-01

    Full Text Available The study investigated the relationship between the personality typology of a sample of Iranian translation students and their translation quality in terms of expressive, appellative, and informative text types. The study also attempted to identify the personality types that perform better in English-to-Persian translation of the three text types. For that purpose, the personality types and the translation quality of the participants were assessed using the Myers-Briggs Type Indicator (MBTI) personality test and translation quality assessment (TQA), respectively. The analysis of the data revealed that the personality type of the participants seemed relevant to the translation quality of all the text types. The translation quality of the participants with intuitive and thinking types was significantly better than that of their sensing-type counterparts in translating expressive texts. The participants with intuitive and feeling types also performed better than their counterparts with the sensing type in translation of the informative text. Moreover, the participants with intuitive, feeling, and thinking personality types performed more successfully than the participants with the sensing type in translation of the appellative text. The findings of the study are discussed in light of the existing research literature.

  3. Knowledge translation to fitness trainers: A systematic review

    Directory of Open Access Journals (Sweden)

    Adamo Kristi B

    2010-04-01

    Full Text Available Abstract Background This study investigates approaches for translating evidence-based knowledge for use by fitness trainers. Specific questions were: Where do fitness trainers get their evidence-based information? What types of interventions are effective for translating evidence-based knowledge for use by fitness trainers? What are the barriers and facilitators to the use of evidence-based information by fitness trainers in their practice? Methods We describe a systematic review of studies about knowledge translation interventions targeting fitness trainers. Fitness trainers were defined as individuals who provide exercise program design and supervision services to the public. Nurses, physicians, physiotherapists, school teachers, athletic trainers, and sport team strength coaches were excluded. Results Of 634 citations, two studies were eligible for inclusion: a survey of 325 registered health fitness professionals (66% response rate) and a qualitative study of 10 fitness instructors. Both studies identified that fitness trainers obtain information from textbooks, networking with colleagues, scientific journals, seminars, and mass media. Fitness trainers holding higher levels of education were reported to use evidence-based information sources such as scientific journals, whereas those with lower education levels were reported to use mass media sources. The studies identified did not evaluate interventions to translate evidence-based knowledge for fitness trainers and did not explore factors influencing uptake of evidence in their practice. Conclusion Little is known about how fitness trainers obtain and incorporate new evidence-based knowledge into their practice. Further exploration and specific research is needed to better understand how emerging health-fitness evidence can be translated to maximize its use by fitness trainers providing services to the general public.

  4. Genome-Wide Search for Translated Upstream Open Reading Frames in Arabidopsis Thaliana.

    Science.gov (United States)

    Hu, Qiwen; Merchante, Catharina; Stepanova, Anna N; Alonso, Jose M; Heber, Steffen

    2016-03-01

    Upstream open reading frames (uORFs) are open reading frames that occur within the 5' UTR of an mRNA. uORFs have been found in many organisms. They play an important role in gene regulation, cell development, and in various metabolic processes. It is believed that translated uORFs reduce the translational efficiency of the main coding region. However, only a few uORFs have been experimentally characterized. In this paper, we use ribosome footprinting together with a semi-supervised approach based on stacking classification models to identify translated uORFs in Arabidopsis thaliana. Our approach identified 5360 potentially translated uORFs in 2051 genes. GO terms enriched in genes with translated uORFs include catalytic activity, binding, transferase activity, phosphotransferase activity, kinase activity, and transcription regulator activity. The reported uORFs occur with a higher frequency in multi-isoform genes, and some uORFs are affected by alternative transcript start sites or alternative splicing events. Association rule mining revealed sequence features associated with the translation status of the uORFs. We hypothesize that uORF translation is a complex process that might be regulated by multiple factors. The identified uORFs are available online at: https://www.dropbox.com/sh/zdutupedxafhly8/AABFsdNR5zDfiozB7B4igFcja?dl=0. This paper is the extended version of our research presented at ISBRA 2015.
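What counts as a candidate uORF can be made concrete with a toy scanner. This is illustrative only: the paper identifies *translated* uORFs from ribosome footprints with a stacking classifier, not by sequence scanning, and the example sequence is invented. The sketch reports every ATG in a 5' UTR that opens onto an in-frame stop codon before the UTR ends.

```python
STOPS = {"TAA", "TAG", "TGA"}


def find_uorfs(utr5):
    """Return (start, end) of each complete ORF within a 5' UTR string."""
    uorfs = []
    for start in range(len(utr5) - 2):
        if utr5[start:start + 3] == "ATG":
            # walk codon by codon until an in-frame stop is found
            for pos in range(start + 3, len(utr5) - 2, 3):
                if utr5[pos:pos + 3] in STOPS:
                    uorfs.append((start, pos + 3))
                    break
    return uorfs


utr = "GGATGAAATAAGGGATGCCC"   # one complete uORF, one ATG without a stop
assert find_uorfs(utr) == [(2, 11)]
```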

  5. Systematic genomic and translational efficiency studies of uveal melanoma.

    Directory of Open Access Journals (Sweden)

    Chelsea Place Johnson

    Full Text Available To further our understanding of the somatic genetic basis of uveal melanoma, we sequenced the protein-coding regions of 52 primary tumors and 3 liver metastases together with paired normal DNA. Known recurrent mutations were identified in GNAQ, GNA11, BAP1, EIF1AX, and SF3B1. The role of mutated EIF1AX was tested using loss of function approaches including viability and translational efficiency assays. Knockdown of both wild type and mutant EIF1AX was lethal to uveal melanoma cells. We probed the function of N-terminal tail EIF1AX mutations by performing RNA sequencing of polysome-associated transcripts in cells expressing endogenous wild type or mutant EIF1AX. Ribosome occupancy of the global translational apparatus was sensitive to suppression of wild type but not mutant EIF1AX. Together, these studies suggest that cells expressing mutant EIF1AX may exhibit aberrant translational regulation, which may provide clonal selective advantage in the subset of uveal melanoma that harbors this mutation.

  6. Finding Translation Examples for Under-Resourced Language Pairs or for Narrow Domains; the Case for Machine Translation

    Directory of Open Access Journals (Sweden)

    Dan Tufis

    2012-07-01

    Full Text Available The cyberspace is populated with valuable information sources, expressed in about 1500 different languages and dialects. Yet, for the vast majority of WEB surfers this wealth of information is practically inaccessible or meaningless. Recent advancements in cross-lingual information retrieval, multilingual summarization, cross-lingual question answering and machine translation promise to narrow the linguistic gaps and lower the communication barriers between humans and/or software agents. Most of these language technologies are based on statistical machine learning techniques which require large volumes of cross-lingual data. The most adequate type of cross-lingual data is represented by parallel corpora, collections of reciprocal translations. However, it is not easy to find enough parallel data for any language pair that might be of interest. When the required parallel data refers to specialized (narrow) domains, the scarcity of data becomes even more acute. Intelligent information extraction techniques from comparable corpora provide one of the possible answers to this lack of translation data.

  7. Film Adaptation as Translation: An Analysis of Adaptation Shifts in Silver Linings Playbook

    Directory of Open Access Journals (Sweden)

    Katerina Perdikaki

    2017-12-01

    Full Text Available The purpose of this paper is to approach film adaptation as a modality of translation and to provide a systematic analysis of the changes occurring in the adaptation of a novel for the big screen. These changes, i.e. adaptation shifts, are examined by means of a model that consists of a descriptive/comparative component and an interpretive component. The model is derived from combining insights from adaptation and translation studies and thus builds on the interdisciplinary nature of adaptation studies so as to offer a comprehensive methodological tool for the analysis of adaptations. As processes and products, adaptation and translation involve an act of communication between a source and a target text within a new sociocultural context. In this light, adaptation can be examined as a case of intersemiotic translation in that it involves the transfer of meaning between two different media; in the case of film adaptation, more specifically, meaning is transferred from book to film and the dynamics between the source novel and adaptation is juxtaposed with that between a source text and its translation. The adaptation model is applied to the film adaptation Silver Linings Playbook with an aim to understand the aspects in which the adaptation differs from the source novel and the rationale behind the adaptation shifts. Finally, it is argued that such an analysis from a descriptive as well as an interpretive perspective can lead to a more holistic understanding of adaptation as a cultural phenomenon in the contemporary creative industries.

  8. Binary Systematic Network Coding for Progressive Packet Decoding

    OpenAIRE

    Jones, Andrew L.; Chatzigeorgiou, Ioannis; Tassi, Andrea

    2015-01-01

    We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve ...
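The idea of Gaussian-elimination-based progressive decoding over GF(2) can be sketched as follows. This is an illustration of the general technique, not the authors' algorithm: each received packet is a row of bit coefficients plus a payload (an integer standing in for packet bytes), and any row that elimination reduces to a single coefficient yields a source packet immediately, before full rank is reached.

```python
def decode(received, n):
    """Gauss-Jordan elimination over GF(2). Each row is
    [c0, ..., c(n-1), payload]: bit coefficients plus an integer payload
    that is XORed along with them. Returns {source_index: payload} for
    every source packet resolved so far (possibly before full rank)."""
    rows = [row[:] for row in received]
    used = []
    for col in range(n):
        pivot = next((r for r in rows if r[col] == 1 and id(r) not in used),
                     None)
        if pivot is None:
            continue                      # column not yet covered
        used.append(id(pivot))
        for r in rows:
            if r is not pivot and r[col] == 1:
                for k in range(n + 1):
                    r[k] ^= pivot[k]      # XOR coefficients and payload
    return {r[:n].index(1): r[n] for r in rows if sum(r[:n]) == 1}


# Two of three packets arrive: the systematic packet s0 and the coded
# packet s0 XOR s1. Progressive decoding already yields s0 and s1.
recovered = decode([[1, 0, 0, 0x11], [1, 1, 0, 0x33]], 3)
assert recovered == {0: 0x11, 1: 0x22}
```

Systematic codes help here because uncoded (identity-row) packets are decodable on arrival, which is what the partial-decoding analysis in the paper quantifies.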

  9. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.

  10. Assessing the Quality of Persian Translation of Kite Runner based on House’s (2014) Functional Pragmatic Model

    Directory of Open Access Journals (Sweden)

    Fateme Kargarzadeh

    2017-03-01

    Full Text Available Translation quality assessment is at the heart of any theory of translation. It is used in academic or teaching contexts to judge translations, to discuss their merits and demerits and to suggest solutions. Literary translation, however, needs more consideration in terms of quality and clarity, as it is a widely read form of translation. In this respect, the Persian literary translation of Kite Runner was investigated based on House’s (2014) functional pragmatic model of translation quality assessment. To this end, around 100 pages from the beginning of both the English and Persian versions of the novel were selected and compared. Using House’s model, the profile of the source text register was created and the genre was recognized. The source text profile was then compared to the translation text profile. The comparison revealed minute mismatches in field, tenor, and mode, which were accounted for as overt erroneous expressions, alongside matches which were accounted for as covert translation. The mismatches included some mistranslations of tenses and the selection of inappropriate meanings for lexical items. Since the informal and culture-specific terms were transferred thoroughly, the cultural filter was not applied, and the translation was judged to be a covert one. The findings of the study have implications for translators, researchers and translator trainers.

  11. Making texts in electronic health records comprehensible to consumers: a prototype translator.

    Science.gov (United States)

    Zeng-Treitler, Qing; Goryachev, Sergey; Kim, Hyeoneui; Keselman, Alla; Rosendale, Douglas

    2007-10-11

    Narrative reports from electronic health records are a major source of content for personal health records. We designed and implemented a prototype text translator to make these reports more comprehensible to consumers. The translator identifies difficult terms, replaces them with easier synonyms, and generates and inserts explanatory texts for them. In feasibility testing, the application was used to translate 9 clinical reports. The majority (68.8%) of text replacements and insertions were deemed correct and helpful by expert review. User evaluation demonstrated a non-statistically significant trend toward better comprehension when translation is provided (p=0.15).
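The replace-and-explain strategy can be sketched in a few lines. The dictionary entries and function names below are invented for illustration; the actual prototype draws on curated consumer-health vocabularies and more careful term identification. Difficult terms are swapped for easier synonyms, and a short explanation is inserted after the first mention.

```python
SYNONYMS = {"hypertension": "high blood pressure",
            "myocardial infarction": "heart attack"}
EXPLANATIONS = {"high blood pressure":
                "a condition in which blood pushes too hard against vessel walls"}


def translate(report):
    """Replace difficult terms; explain the first mention of each."""
    out = report
    for term, easy in SYNONYMS.items():
        if term in out:
            expl = EXPLANATIONS.get(easy)
            first = f"{easy} ({expl})" if expl else easy
            out = out.replace(term, first, 1)   # explained first mention
            out = out.replace(term, easy)       # plain swap afterwards
    return out


text = "Patient has hypertension; rule out myocardial infarction."
simplified = translate(text)
```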

  12. The Semantics and Pragmatics of Translating Culture-Bound References in Film Dubbing

    Science.gov (United States)

    Bendus, Maryana

    2012-01-01

    This work deals with a number of issues relating to the multifaceted phenomenon of audiovisual translation. The primary concern of the dissertation is with the evaluation of translation strategies of extralinguistic culture-bound references, in particular, in films dubbed from English (as the source language) into Ukrainian (as the target…

  13. In the Beginning was a Mutualism - On the Origin of Translation

    Science.gov (United States)

    Vitas, Marko; Dobovišek, Andrej

    2018-04-01

    The origin of translation is critical for understanding the evolution of life, including the origins of life. The canonical genetic code is one of the most dominant aspects of life on this planet, while the origin of heredity is one of the key evolutionary transitions in the living world. Why the translation apparatus evolved is one of the enduring mysteries of molecular biology. Assuming the hypothesis that during the emergence of life evolution first involved autocatalytic systems, which only subsequently acquired the capacity for genetic heredity, we propose and discuss possible mechanisms and basic aspects of the emergence and subsequent molecular evolution of translation and ribosomes, as well as of enzymes as we know them today. It is possible, in this sense, to view the ribosome as a digital-to-analogue information converter. The proposed mechanism is based on the abilities and tendencies of short RNAs and polypeptides to fold and to catalyse biochemical reactions. The proposed mechanism is in concordance with the hypothesis of a possible chemical co-evolution of RNA and proteins at the origin of the genetic code or, more generally, in the early evolution of life on Earth. The possible abundance and availability of monomers under prebiotic conditions are considered in the mechanism. The hypothesis that early polypeptides folded on an RNA scaffold is also considered, and mutualism in the molecular evolutionary development of RNA and peptides is favoured.

  14. Introducing Grounded Theory into translation studies | Wehrmeyer ...

    African Journals Online (AJOL)

    This article introduces tenets of Grounded Theory into a reception-oriented model for translation studies, in which the basis of comparison (tertium comparationis) between source and target texts is constructed from target audience expectancy norms. The model is primarily designed for projects where conformity to target ...

  15. A source of translationally cold molecular beams

    Science.gov (United States)

    Sarkozy, Laszlo C.

    Currently the fields studying or using molecules with low kinetic energies are experiencing unprecedented growth. Astronomers and chemists are interested in chemical reactions taking place at temperatures below or around 20 K, spectroscopists could make very precise measurements on slow molecules, and molecular physicists could chart potential energy surfaces more accurately. And the list continues. All of these experiments need slow molecules, with kinetic energies from around 10 cm-1 down to 0. Several designs of cold sources have already been made; the most interesting ones are presented. This work describes the design and testing of a cold source based on the collisional cooling technique: the molecules of interest are cooled well below their freezing point by a precooled buffer gas. This way condensation is avoided. The source is a copper cell cooled to 4.2 K by an external liquid helium bath. The cell is filled with cold buffer gas (helium). The molecules of choice (ammonia) are injected through a narrow tube in the middle of the cell. The cold molecules leave the cell through a 1 millimeter hole. Two versions of pulsing techniques have been employed: a shutter blade which covers the source hole and opens it only for short moments, and a chopper that modulates the beam further downstream. Both produced pulse lengths around 1 millisecond. The source is tested in an experiment in which the emerging molecules are focused and detected. The time-of-flight technique is used to measure the kinetic energies. Two detectors have been employed: a microwave cavity to analyze the state of the molecules in the beam, and a mass spectrometer to measure the number density of the particles. The molecules coming out of the source hole are formed into a beam by an electrostatic quadrupole state selector. The quantum mechanical aspects and the elements of electrodynamics involved in the focusing are described. A computer simulation program is presented, which helped

  16. Coding For Compression Of Low-Entropy Data

    Science.gov (United States)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
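The claim that the average can fall below one bit per symbol (the lower bound for symbol-by-symbol Huffman coding) follows from the entropy of a low-information-content source. A quick numerical check, with an illustrative probability: for a binary source emitting a 1 with probability 0.05, the entropy, which arithmetic-style coding can approach, is only about 0.29 bits per symbol.

```python
import math


def binary_entropy(p):
    """Entropy in bits/symbol of a binary source with P(1) = p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


h = binary_entropy(0.05)
assert h < 0.3   # well under the 1 bit/symbol Huffman floor
```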

  17. DOE translation tool: Faster and better than ever

    International Nuclear Information System (INIS)

    El-Chakieh, T.; Vincent, C.

    2006-01-01

    CAE's constant push to advance power plant simulation practices involves continued investment in our technologies. This commitment has yielded many advances in our simulation technologies and tools to provide faster maintenance updates, easier process updates and higher-fidelity models for power plant simulators. Through this quest, a comprehensive, self-contained and user-friendly DCS translation tool for plant control system emulation was created. The translation tool converts an ABB Advant AC160 and/or AC450 control system, used in gas turbine-based, fossil and nuclear power plants, into Linux or Windows-based ROSE[reg] simulation schematics. The translation for a full combined-cycle gas turbine (CCGT) plant that comprises more than 5,300 function plans distributed over 15 nodes is processed in less than five hours on a dual 2.8 GHz Xeon Linux platform, in comparison to the 12 hours required by CAE's previous translation tool. The translation process, using the plant configuration files, includes the parsing of the control algorithms, the databases, the graphics and the interconnections between nodes. A Linux or Windows API is then used to automatically populate the ROSE[reg] database. Without such a translation tool, or if 'stimulation' of the real control system is not feasible or too costly, simulating the DCS manually takes months of error-prone coding. The translation can be performed for all the nodes constituting the configuration files of the whole plant DCS or, to provide faster maintenance updates and easier process updates, as partial builds at three levels based on user inputs into the Graphical User Interface: a. single-schematic updates, b. multi-schematic updates and c. single-node updates. Improvements include: - Process time reduction of over 60%; - All communication connections between nodes are fully automated; - New partial builds for one schematic, a group of schematics or a single node; - Availability on PC

  18. Theoretical studies of field-reversed configurations (FRCs) and experimental study of the FRC during translation

    Energy Technology Data Exchange (ETDEWEB)

    Siemon, R.E.; Armstrong, W.T.; Chrien, R.E.; Klingner, P.L.; Linford, R.K.; McKenna, K.F.; Rej, D.J.; Schwarzmeier, J.L.; Sgro, A.; Sherwood, E.G.

    1984-08-01

    Theoretical studies of FRC stability and transport are summarized. Finite Larmor radius theories are shown to be unreliable for explaining the experimentally observed stability to tilting. Control of the n=2 rotational instability has been demonstrated in 2-dimensional hybrid-code simulations, and the stability appears to be described within MHD if the nearly square equilibria that result from quadrupole fields are taken into account. Simulations of the lower-hybrid-drift instability in parameter regimes relevant to experiments show good agreement with a nonlocal theory of the instability. A 1.5-dimensional transport code shows agreement with the energy confinement time but disagreement with the flux loss time observed in FRX-C. The process of FRC translation, in which the plasma is formed, translated into a dc solenoid, and trapped by magnetic mirrors, has been studied in the FRX-C/T experiment.

  19. Theoretical studies of field-reversed configurations (FRCs) and experimental study of the FRC during translation

    International Nuclear Information System (INIS)

    Siemon, R.E.; Armstrong, W.T.; Chrien, R.E.

    1984-08-01

    Theoretical studies of FRC stability and transport are summarized. Finite Larmor radius theories are shown to be unreliable for explaining the experimentally observed stability to tilting. Control of the n=2 rotational instability has been demonstrated in 2-dimensional hybrid-code simulations, and the stability appears to be described within MHD if the nearly square equilibria that result from quadrupole fields are taken into account. Simulations of the lower-hybrid-drift instability in parameter regimes relevant to experiments show good agreement with a nonlocal theory of the instability. A 1.5-dimensional transport code shows agreement with the energy confinement time but disagreement with the flux loss time observed in FRX-C. The process of FRC translation, in which the plasma is formed, translated into a dc solenoid, and trapped by magnetic mirrors, has been studied in the FRX-C/T experiment.

  20. Difficulties in translation of socio-political texts

    Directory of Open Access Journals (Sweden)

    Артур Нарманович Мамедов

    2013-12-01

    Full Text Available The classification of Russian socio-political texts as belonging to the publicistic style calls for a functional approach in order to find the most adequate linguistic means of conveying the pragmatic meaning of the source text. Intralinguistic meaning can be only partially preserved in the interpretation of German texts. Lexical and grammatical transformations help preserve the semantic-syntactic structure of the target text, so that the translation achieves the same communicative effect as the source text.