WorldWideScience

Sample records for underlying process codes

  1. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  2. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  3. Qualifying codes under software quality assurance: Two examples as guidelines for codes that are existing or under development

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, D.

    1993-05-01

Software quality assurance is an area of concern for DOE, EPA, and other agencies due to the poor quality of software and its documentation they have received in the past. This report briefly summarizes the software development concepts and terminology increasingly employed by these agencies and provides a workable approach to scientific programming under the new requirements. Following this is a practical description of how to qualify a simulation code, based on a software QA plan that has been reviewed and officially accepted by DOE/OCRWM. Two codes have recently been baselined and qualified, so that they can be officially used for QA Level 1 work under the DOE/OCRWM QA requirements. One of them was baselined and qualified within one week. The first of the codes was the multi-phase multi-component flow code TOUGH version 1, an already existing code, and the other was a geochemistry transport code, STATEQ, that was under development. The way to accomplish qualification for both types of codes is summarized in an easy-to-follow, step-by-step fashion to illustrate how to baseline and qualify such codes through a relatively painless procedure.

  4. Qualifying codes under software quality assurance: Two examples as guidelines for codes that are existing or under development

    International Nuclear Information System (INIS)

    Mangold, D.

    1993-05-01

Software quality assurance is an area of concern for DOE, EPA, and other agencies due to the poor quality of software and its documentation they have received in the past. This report briefly summarizes the software development concepts and terminology increasingly employed by these agencies and provides a workable approach to scientific programming under the new requirements. Following this is a practical description of how to qualify a simulation code, based on a software QA plan that has been reviewed and officially accepted by DOE/OCRWM. Two codes have recently been baselined and qualified, so that they can be officially used for QA Level 1 work under the DOE/OCRWM QA requirements. One of them was baselined and qualified within one week. The first of the codes was the multi-phase multi-component flow code TOUGH version 1, an already existing code, and the other was a geochemistry transport code, STATEQ, that was under development. The way to accomplish qualification for both types of codes is summarized in an easy-to-follow, step-by-step fashion to illustrate how to baseline and qualify such codes through a relatively painless procedure.

  5. CITOPP, CITMOD, CITWI, Processing codes for CITATION Code

    International Nuclear Information System (INIS)

    Albarhoum, M.

    2008-01-01

Description of program or function: CITOPP processes the output file of the CITATION 3-D diffusion code. The program can plot axial, radial and circumferential flux distributions (in cylindrical geometry) in addition to the multiplication factor convergence. The flux distributions can be drawn for each group specified by the program and visualized on the screen. CITMOD processes both the output and the input files of the CITATION 3-D diffusion code. CITMOD can visualize both axial and radial-angular models of the reactor described by CITATION input/output files. CITWI processes the input file (CIT.INP) of the CITATION 3-D diffusion code. CIT.INP is processed to deduce the dimensions of the cell whose cross sections can be representative of the reactor component of the same name in section 008 of CIT.INP.

  6. Ensemble coding remains accurate under object and spatial visual working memory load.

    Science.gov (United States)

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
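A toy observer model illustrates why averaging is robust: independent per-item perceptual noise largely cancels in the mean, so comparison accuracy is governed mainly by the mean-size difference between the sets. All parameters below (set size, noise level, trial count) are invented for illustration and are not those of the experiments:

```python
import random

def averaging_accuracy(delta, n_items=8, noise_sd=4.0, trials=2000, seed=0):
    """Fraction of trials on which the noisy 'perceived' mean of the larger
    set still exceeds that of the smaller set (toy observer model)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        base = [rng.gauss(40.0, 6.0) for _ in range(n_items)]  # set A sizes
        shifted = [s + delta for s in base]                    # set B: mean larger by delta
        # each item is encoded with independent perceptual noise, then averaged
        mean_a = sum(s + rng.gauss(0.0, noise_sd) for s in base) / n_items
        mean_b = sum(s + rng.gauss(0.0, noise_sd) for s in shifted) / n_items
        correct += mean_b > mean_a
    return correct / trials
```

Running this for a small versus a large mean-size difference shows accuracy rising with the difference while per-item noise is averaged away, which is the qualitative pattern the abstract describes.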

  7. The 1996 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1996-01-01

The codes are named 'the Pre-processing' codes, because they are designed to pre-process ENDF/B data, for later, further processing for use in applications. This is a modular set of computer codes, each of which reads and writes evaluated nuclear data in the ENDF/B format. Each code performs one or more independent operations on the data, as described below. These codes are designed to be computer independent, and are presently operational on every type of computer from large mainframe computers to small personal computers, such as the IBM-PC and Power MAC. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  8. The 1989 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1989-12-01

    This document summarizes the 1989 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  9. The Coding Process and Its Challenges

    Directory of Open Access Journals (Sweden)

    Judith A. Holton, Ph.D.

    2010-02-01

Full Text Available Coding is the core process in classic grounded theory methodology. It is through coding that the conceptual abstraction of data and its reintegration as theory takes place. There are two types of coding in a classic grounded theory study: substantive coding, which includes both open and selective coding procedures, and theoretical coding. In substantive coding, the researcher works with the data directly, fracturing and analysing it, initially through open coding for the emergence of a core category and related concepts and then subsequently through theoretical sampling and selective coding of data to theoretically saturate the core and related concepts. Theoretical saturation is achieved through constant comparison of incidents (indicators) in the data to elicit the properties and dimensions of each category (code). This constant comparing of incidents continues until the process yields the interchangeability of indicators, meaning that no new properties or dimensions are emerging from continued coding and comparison. At this point, the concepts have achieved theoretical saturation and the theorist shifts attention to exploring the emergent fit of potential theoretical codes that enable the conceptual integration of the core and related concepts to produce hypotheses that account for relationships between the concepts, thereby explaining the latent pattern of social behaviour that forms the basis of the emergent theory. The coding of data in grounded theory occurs in conjunction with analysis through a process of conceptual memoing, capturing the theorist's ideation of the emerging theory. Memoing occurs initially at the substantive coding level and proceeds to higher levels of conceptual abstraction as coding proceeds to theoretical saturation and the theorist begins to explore conceptual reintegration through theoretical coding.

  10. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background, and to improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR codes (Quick Response Codes), and shows algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
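Sauvola's adaptive threshold, T = m·(1 + k·(s/R − 1)) with local mean m and standard deviation s over a sliding window, is the standard starting point for this kind of binarization. A naive sketch follows; the window size, k and R are common textbook defaults, not necessarily the values used in the paper:

```python
import numpy as np

def sauvola_threshold(img, window=15, k=0.2, R=128.0):
    """Naive Sauvola adaptive binarization: T = m * (1 + k * (s/R - 1)).

    Dark pixels (below the local threshold) map to 0, light pixels to 255.
    """
    h, w = img.shape
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            win = padded[y:y + window, x:x + window]
            m, s = win.mean(), win.std()
            T = m * (1.0 + k * (s / R - 1.0))
            out[y, x] = 0 if img[y, x] < T else 255
    return out
```

A production version would compute the windowed mean and variance with integral images instead of the per-pixel loop; the formula itself is unchanged.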

  11. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. The book offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition; enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; and discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks.

  12. Development of process simulation code for reprocessing plant and process analysis for solvent degradation and solvent washing waste

    International Nuclear Information System (INIS)

    Tsukada, Tsuyoshi; Takahashi, Keiki

    1999-01-01

We developed a process simulation code for an entire nuclear fuel reprocessing plant. The code can be used on a PC. Almost all of the equipment in the reprocessing plant is included in the code, and the mass balance model of each item of equipment is based on the distribution factors of the flow-out streams. All models are connected between the outlet flow and the inlet flow according to the process flow sheet. We estimated the amount of DBP from TBP degradation in the entire process by using the developed code. Most of the DBP is generated in the Pu refining process by the effect of α radiation from Pu, which is extracted in the solvent. On the other hand, very little DBP is generated in the U refining process. We therefore propose simplification of the solvent washing process and volume reduction of the alkali washing waste in the U refining process. The first Japanese commercial reprocessing plant is currently under construction at Rokkasho-mura. Recently, for the sake of process simplification, the original process design has been changed. Using our code, we analyzed the original process and the simplified process. According to our results, the volume of alkali waste solution in the low-level liquid treatment process will be reduced by half in the simplified process. (author)
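The distribution-factor mass balance described above can be sketched as a small routine that splits each inlet component among outlet streams, with outlets then wired to the inlets of downstream units per the flow sheet. The component names and split fractions below are invented for illustration, not taken from the actual plant model:

```python
def run_unit(inlet, factors):
    """Split each inlet component among outlet streams by distribution factors.

    inlet:   {component: amount}
    factors: {outlet_name: {component: fraction of inlet routed to this outlet}}
    """
    outlets = {name: {} for name in factors}
    for comp, amount in inlet.items():
        for name, split in factors.items():
            outlets[name][comp] = amount * split.get(comp, 0.0)
    return outlets
```

Chaining units is then just feeding one unit's outlet dictionary into the next unit's call, which mirrors the outlet-to-inlet connections the abstract describes; as long as each component's fractions sum to one, mass is conserved across every unit.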

  13. Code-Mixing and Code-Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

Full Text Available This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as to determine the factors influencing those forms of code switching and code mixing. The research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School Ponorogo. Based on the analysis and discussion stated in the previous chapters, the forms of code mixing and code switching in learning activities at Al Mawaddah Boarding School alternate among Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, and sourcing from the original language and its variations or from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah boarding school, regarding the rules and characteristic variation of language in teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  14. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance codes, do not require parallelisation of their individual modules. The second category of codes, such as conventional FEM codes, requires parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), parallel active column solvers and the substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab
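As a rough illustration of the domain decomposition idea mentioned above (not the ANUPAM implementation, whose details are not given here), the following sketch solves a 1-D Poisson problem with an alternating Schwarz iteration over two overlapping subdomains, each solved independently with the other's current values as boundary data; the grid size, overlap and sweep count are arbitrary choices:

```python
import numpy as np

def poisson_schwarz(n=101, overlap=20, sweeps=50):
    """Alternating Schwarz DDM for -u'' = 1 on [0,1], u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)
    mid = n // 2
    a_hi = mid + overlap // 2   # right boundary node of subdomain A
    b_lo = mid - overlap // 2   # left boundary node of subdomain B

    def solve_sub(lo, hi):
        # central-difference solve on interior nodes lo+1 .. hi-1,
        # using the current global u at nodes lo and hi as boundary data
        m = hi - lo - 1
        A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        rhs = np.ones(m)
        rhs[0] += u[lo] / h**2
        rhs[-1] += u[hi] / h**2
        u[lo + 1:hi] = np.linalg.solve(A, rhs)

    for _ in range(sweeps):
        solve_sub(0, a_hi)       # subdomain A
        solve_sub(b_lo, n - 1)   # subdomain B
    return x, u
```

In a parallel setting the two subdomain solves (with a coloured or additive variant) run on separate processors and exchange only the overlap values each sweep, which is exactly why the equation solution module is the hard part to parallelise.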

  15. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT, RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group averages, mixtures of cross-sections, etc.; and to produce graphical plots and data comparisons. The codes are designed to operate on virtually any type of computer, including PCs. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)
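One of the functions listed, producing a pointwise-linearly-interpolable cross section (the job of a code like LINEAR), can be sketched as adaptive bisection: keep splitting an interval until linear interpolation reproduces the function at the tested midpoint within tolerance. The tolerance and the 1/√E "1/v-like" test function below are illustrative assumptions, not the actual LINEAR algorithm:

```python
def linearize(f, x0, x1, tol=1e-3):
    """Adaptively insert points until linear interpolation reproduces f
    at each tested midpoint within a fractional tolerance."""
    y0, y1 = f(x0), f(x1)
    xm = 0.5 * (x0 + x1)
    if abs(f(xm) - 0.5 * (y0 + y1)) <= tol * abs(f(xm)):
        return [(x0, y0), (x1, y1)]
    # midpoint check failed: refine both halves and merge the point lists
    return linearize(f, x0, xm, tol) + linearize(f, xm, x1, tol)[1:]
```

The resulting (energy, cross-section) table is densest where the curve bends most, which is the behaviour needed before Doppler broadening or group averaging steps.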

  16. Process-engineering control valves under the EC codes; Steuerventile fuer die Prozesstechnik im Geltungsbereich der EG-Richtlinien

    Energy Technology Data Exchange (ETDEWEB)

    Gohlke, B. [IMI Norgren Herion Fluidtronic GmbH und Co. KG, Fellbach (Germany)

    2003-09-01

    The European Parliament and European Council have enacted special codes in order to implement uniform conditions in all countries of the European Community. The manufacturers of technical and commercial products are obliged to adhere to these codes. Harmonized standards, which are to be used as a tool for the implementation of the codes, are embedded at another level of the overall 'European reference literature'. Two EC codes, in particular, are definitive for fluids engineering: On the one hand, the EC Machinery Code, 98/37/EC and, on the other hand, the EC Pressurized Equipment Code, 97/23/EC. These EC codes cover, inter alia, machinery and chemical process-engineering plants, and conventional power generating plants. Norgren-Herion, a manufacturer of fluid engineering components, perceived a necessity for positioning its control valves in the scope of applicability of the EC codes. This article describes experience with the EC codes from the control valve manufacturer's point of view and examines the various qualification procedures for control valves. (orig.)

  17. Covariance data processing code. ERRORJ

    International Nuclear Information System (INIS)

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)
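ERRORJ's internal algorithms are not described in this record, but a basic relation that any covariance processing relies on, splitting a covariance matrix into standard deviations (uncertainties) and a correlation matrix, can be sketched as:

```python
import numpy as np

def cov_to_corr(cov):
    """Standard deviations and correlation matrix from a covariance matrix:
    corr[i, j] = cov[i, j] / (std[i] * std[j])."""
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    return std, corr
```

For processed cross-section covariances this is how group-wise relative uncertainties and energy-energy correlations are usually reported.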

  18. User's manual for a process model code

    International Nuclear Information System (INIS)

    Kern, E.A.; Martinez, D.P.

    1981-03-01

    The MODEL code has been developed for computer modeling of materials processing facilities associated with the nuclear fuel cycle. However, it can also be used in other modeling applications. This report provides sufficient information for a potential user to apply the code to specific process modeling problems. Several examples that demonstrate most of the capabilities of the code are provided

  19. Cell-assembly coding in several memory processes.

    Science.gov (United States)

    Sakurai, Y

    1998-01-01

The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are neuron overlaps among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
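A minimal numerical sketch of the second property, functional connections detected as activity correlations that change between tasks, might look like the following; the Poisson rates and bin count are invented for illustration and are not from the experiments:

```python
import numpy as np

def assembly_correlation(seed=0, n_bins=200):
    """Toy illustration: two neurons share a common drive in task 1
    (functional connection 'on') but fire independently in task 2 ('off').
    Returns the Pearson correlation of binned spike counts in each task."""
    rng = np.random.default_rng(seed)
    common = rng.poisson(5.0, n_bins)  # shared assembly input in task 1
    task1 = np.corrcoef(common + rng.poisson(1.0, n_bins),
                        common + rng.poisson(1.0, n_bins))[0, 1]
    task2 = np.corrcoef(rng.poisson(6.0, n_bins),
                        rng.poisson(6.0, n_bins))[0, 1]
    return task1, task2
```

The same mean firing rate in both tasks, but a task-dependent pairwise correlation, is the signature of connection dynamics the abstract describes.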

  20. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for PKA energy spectra and heating number under neutron irradiation

    International Nuclear Information System (INIS)

    Iwamoto, Y.; Ogawa, T.

    2016-01-01

The modelling of the damage in materials irradiated by neutrons is needed for understanding the mechanism of radiation damage in fission and fusion reactor facilities. The molecular dynamics simulations of damage cascades with full atomic interactions require information about the energy distribution of the Primary Knock-on Atoms (PKAs). The most common process to calculate PKA energy spectra under low-energy neutron irradiation is to use the nuclear data processing code NJOY2012. It calculates group-to-group recoil cross section matrices using nuclear data libraries in the ENDF data format, which contain energy and angular recoil distributions for many reactions. After the NJOY2012 process, SPKA6C is employed to produce PKA energy spectra by combining recoil cross section matrices with an incident neutron energy spectrum. However, intercomparison between different processes and nuclear data libraries has not been studied yet. In particular, the higher energy (~5 MeV) of the incident neutrons, compared to fission, opens many reaction channels, which produces a complex distribution of PKAs in energy and type. Recently, we have developed the event generator mode (EGM) in the Particle and Heavy Ion Transport code System PHITS for neutron-induced reactions in the energy region below 20 MeV. The main feature of EGM is to produce PKAs while keeping energy and momentum conservation in each reaction. It is used for event-by-event analysis in application fields such as soft error analysis in semiconductors, microdosimetry in the human body, and estimation of Displacement per Atom (DPA) values in metals. The purpose of this work is to quantify the differences in PKA spectra and in the heating number related to kerma between the two calculation methods, PHITS-EGM and NJOY2012+SPKA6C, with the different libraries TENDL-2015, ENDF/B-VII.1 and JENDL-4.0 for fusion-relevant materials.
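The kinematic bound underlying any PKA spectrum can be illustrated with the standard non-relativistic elastic-scattering formula, E_max = 4A/(1+A)² · E_n (taking the neutron mass as 1 u); this is textbook two-body kinematics, not the PHITS or NJOY implementation:

```python
def pka_max_energy(e_neutron_mev, mass_number):
    """Maximum PKA recoil energy from elastic neutron scattering
    (non-relativistic): E_max = 4*A/(1+A)**2 * E_n, neutron mass ~ 1 u."""
    a = float(mass_number)
    return 4.0 * a / (1.0 + a) ** 2 * e_neutron_mev
```

For a 5 MeV neutron on Fe-56 this gives roughly 0.34 MeV, while a hydrogen recoil can take the full neutron energy; inelastic and multi-particle channels at these energies spread PKAs below and around such bounds, which is why the energy- and momentum-conserving event generator matters.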

  1. Bilingual processing of ASL-English code-blends: The consequences of accessing two lexical representations simultaneously

    OpenAIRE

    Emmorey, Karen; Petrich, Jennifer; Gollan, Tamar H.

    2012-01-01

    Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for...

  2. Pre-processing of input files for the AZTRAN code

    International Nuclear Information System (INIS)

    Vargas E, S.; Ibarra, G.

    2017-09-01

The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Due to the complexity of generating an input file for the code, a script based on the D language was developed to make its preparation easier. It is based on a new input file format with specific cards, divided into two blocks (mandatory cards and optional cards), and includes a pre-processing of the input file to identify possible errors within it, as well as an image generator for the specific problem based on the Python interpreter. (Author)
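A sketch of the kind of pre-check described, validating card names and flagging missing mandatory cards before the solver runs, is given below in Python for brevity (the actual script is written in D, and the card names here are entirely hypothetical, not AZTRAN's real card set):

```python
# Hypothetical card names for illustration only.
MANDATORY = {"GEOMETRY", "MATERIALS", "SOURCE"}
OPTIONAL = {"PLOT", "EDIT"}

def precheck(lines):
    """Return (errors, cards): unknown card names and missing mandatory cards.

    Each non-blank, non-comment line is assumed to start with a card keyword.
    """
    cards = {ln.split()[0].upper()
             for ln in lines if ln.strip() and not ln.lstrip().startswith("#")}
    errors = [f"unknown card: {c}" for c in sorted(cards - MANDATORY - OPTIONAL)]
    errors += [f"missing mandatory card: {c}" for c in sorted(MANDATORY - cards)]
    return errors, cards
```

Catching such errors before the transport calculation starts is the whole point of an input pre-processor: a missing card fails in seconds instead of mid-run.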

  3. Single-Trial Evoked Potential Estimating Based on Sparse Coding under Impulsive Noise Environment

    Directory of Open Access Journals (Sweden)

    Nannan Yu

    2018-01-01

Full Text Available Estimating single-trial evoked potentials (EPs) corrupted by the spontaneous electroencephalogram (EEG) can be regarded as a signal denoising problem. Sparse coding has had significant success in signal denoising, and EPs have been proven to have strong sparsity over an appropriate dictionary. In sparse coding, the noise is generally considered to be a Gaussian random process. However, some studies have shown that the background noise in EPs may present an impulsive characteristic which is far from Gaussian but suitable to be modeled by the α-stable distribution (1 < α ≤ 2). Consequently, the performance of general sparse coding will degrade or even fail. In view of this, we present a new sparse coding algorithm using p-norm optimization for single-trial EP estimation. The algorithm can track the underlying EPs corrupted by α-stable distribution noise, trial by trial, without the need to estimate the α value. Simulations and experiments on human visual evoked potentials and event-related potentials are carried out to examine the performance of the proposed approach. Experimental results show that the proposed method is effective in estimating single-trial EPs under impulsive noise environments.
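As a rough illustration of sparse-coding denoising (using the familiar l1 objective solved by ISTA, rather than the paper's p-norm objective and α-stable noise model), the basic iteration can be sketched as:

```python
import numpy as np

def ista(y, D, lam=0.1, n_iter=200):
    """ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1.
    (The paper optimizes a p-norm under alpha-stable noise; the l1 case
    here is only the standard special case, for illustration.)"""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x + D.T @ (y - D @ x) / L          # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x
```

With an identity dictionary and mild Gaussian noise this recovers a sparse signal's support exactly; the paper's contribution is precisely that a heavier-tailed (α-stable) noise term breaks this Gaussian-noise formulation and motivates the p-norm data term.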

  4. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
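A toy example of the "code mechanically checked against the standard" step might look like the following; the two rules are invented for illustration and are not JPL's coding standard, which real static analyzers enforce far more deeply:

```python
import re

def check_style(source, max_len=100):
    """Toy static check: flag overlong lines and a banned construct.
    Returns a list of (line_number, message) pairs."""
    issues = []
    for i, line in enumerate(source.splitlines(), 1):
        if len(line) > max_len:
            issues.append((i, "line too long"))
        if re.search(r"\bgoto\b", line):
            issues.append((i, "goto is banned by the coding standard"))
    return issues
```

An empty issue list gates the code forward in the certification pipeline; a real setup would run industrial analyzers and the project's full rule set instead of these two stand-ins.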

  5. Description of ground motion data processing codes: Volume 3

    International Nuclear Information System (INIS)

    Sanders, M.L.

    1988-02-01

Data processing codes developed to process ground motion at the Nevada Test Site for the Weapons Test Seismic Investigations Project are used today as part of the program to process ground motion records for the Nevada Nuclear Waste Storage Investigations Project. The work contained in this report documents and lists the codes and verifies the "PSRV" code. 39 figs

  6. Enhancement of the SPARC90 code to pool scrubbing events under jet injection regime

    Energy Technology Data Exchange (ETDEWEB)

    Berna, C., E-mail: ceberes@iie.upv.es [Instituto de Ingeniería Energética, Universitat Politècnica de València (UPV), Camino de Vera 14, 46022 Valencia (Spain); Escrivá, A.; Muñoz-Cobo, J.L. [Instituto de Ingeniería Energética, Universitat Politècnica de València (UPV), Camino de Vera 14, 46022 Valencia (Spain); Herranz, L.E., E-mail: luisen.herranz@ciemat.es [Unit of Nuclear Safety Research Division of Nuclear Fission, CIEMAT, Avda. Complutense 22, 28040 Madrid (Spain)

    2016-04-15

    Highlights: • Review of the most recent literature concerning submerged jets. • Emphasize all variables and processes occurring along the jet region. • Highlight the gaps of knowledge still existing related to submerged jets. • Enhancement of SPARC90-Jet to estimate aerosol removal under jet injection regime. • Validation of the SPARC90-Jet results against pool scrubbing experimental data. - Abstract: Submerged gaseous jets may have an outstanding relevance in many industrial processes and may be of particular significance in severe nuclear accident scenarios, like in the Fukushima accident. Even though pool scrubbing has been traditionally associated with low injection velocities, there are a number of potential scenarios in which fission product trapping in aqueous ponds might also occur under jet injection regime (like SGTR meltdown sequences in PWRs and SBO ones in BWRs). The SPARC90 code was developed to determine the fission product trapping in pools during severe accidents. The code assumes that carrier gas arrives at the water ponds at low or moderate velocities and it forms a big bubble that eventually detaches from the injection pipe. However, particle laden gases may enter the water at very high velocities resulting in a submerged gas jet instead. This work presents the fundamentals, major hypotheses and changes introduced into the code in order to estimate particle removal during gas injection in pools under the jet regime (SPARC90-Jet). A simplified and reliable approach to submerged jet hydrodynamics has been implemented on the basis of updated equations for jet hydrodynamics and aerosol removal, so that gas–liquid and droplet-particles interactions are described. The code modifications have been validated as far as possible. However, no suitable hydrodynamic tests have been found in the literature, so that an indirect validation has been conducted through comparisons against data from pool scrubbing experiments. Besides, this validation

  7. Enhancement of the SPARC90 code to pool scrubbing events under jet injection regime

    International Nuclear Information System (INIS)

    Berna, C.; Escrivá, A.; Muñoz-Cobo, J.L.; Herranz, L.E.

    2016-01-01

    Highlights: • Review of the most recent literature concerning submerged jets. • Emphasize all variables and processes occurring along the jet region. • Highlight the gaps of knowledge still existing related to submerged jets. • Enhancement of SPARC90-Jet to estimate aerosol removal under jet injection regime. • Validation of the SPARC90-Jet results against pool scrubbing experimental data. - Abstract: Submerged gaseous jets may have an outstanding relevance in many industrial processes and may be of particular significance in severe nuclear accident scenarios, like in the Fukushima accident. Even though pool scrubbing has been traditionally associated with low injection velocities, there are a number of potential scenarios in which fission product trapping in aqueous ponds might also occur under jet injection regime (like SGTR meltdown sequences in PWRs and SBO ones in BWRs). The SPARC90 code was developed to determine the fission product trapping in pools during severe accidents. The code assumes that carrier gas arrives at the water ponds at low or moderate velocities and it forms a big bubble that eventually detaches from the injection pipe. However, particle laden gases may enter the water at very high velocities resulting in a submerged gas jet instead. This work presents the fundamentals, major hypotheses and changes introduced into the code in order to estimate particle removal during gas injection in pools under the jet regime (SPARC90-Jet). A simplified and reliable approach to submerged jet hydrodynamics has been implemented on the basis of updated equations for jet hydrodynamics and aerosol removal, so that gas–liquid and droplet-particles interactions are described. The code modifications have been validated as far as possible. However, no suitable hydrodynamic tests have been found in the literature, so that an indirect validation has been conducted through comparisons against data from pool scrubbing experiments. Besides, this validation

  8. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  9. Investigating the Simulink Auto-Coding Process

    Science.gov (United States)

    Gualdoni, Matthew J.

    2016-01-01

    Model-based program design is the most clear and direct way to develop algorithms and programs for interfacing with hardware. While coding "by hand" results in a more tailored product, the ever-growing size and complexity of modern-day applications can cause the project workload to quickly become unreasonable for one programmer. This has generally been addressed by splitting the product into separate modules to allow multiple developers to work in parallel on the same project; however, this introduces new potential for errors in the process. The fluidity, reliability and robustness of the code rely on the abilities of the programmers to communicate their methods to one another; furthermore, multiple programmers invite multiple, potentially differing coding styles into the same product, which can cause a loss of readability or even module incompatibility. Fortunately, MathWorks has implemented an auto-coding feature that allows programmers to design their algorithms through the use of models and diagrams in the graphical programming environment Simulink, allowing the designer to visually determine what the hardware is to do. From here, the auto-coding feature handles converting the project into another programming language. This type of approach allows the designer to clearly see how the software will be directing the hardware without the need to interpret large amounts of code. In addition, it speeds up the programming process, minimizing the man-hours spent on a single project, thus reducing the chance of human error as well as project turnover time. One such project that has benefited from the auto-coding procedure is Ramses, a portion of the GNC flight software on board Orion that has been implemented primarily in Simulink. Currently, however, auto-coding Ramses into C++ requires 5 hours of code generation time. This causes issues if the tool ever needs to be debugged, as this code generation will need to occur with each edit to any part of

  10. Arabic Natural Language Processing System Code Library

    Science.gov (United States)

    2014-06-01

    Adelphi, MD 20783-1197 This technical note provides a brief description of a Java library for Arabic natural language processing (NLP) containing code...for training and applying the Arabic NLP system described in the paper "A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix...and also English) natural language processing (NLP), containing code for training and applying the Arabic NLP system described in Stephen Tratz's

  11. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for the pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed for large-scale nuclear codes by examining the following items: 1) the present profile of the computational load in JAERI, analyzed by compiling computer utilization statistics; 2) vector processing efficiency estimated for the ten most heavily used nuclear codes by analyzing their dynamic behavior on a scalar machine; 3) vector processing efficiency measured for five other nuclear codes using the current vector processors, FACOM 230-75 APU and CRAY-1; 4) effectiveness of applying a high-speed vector processor to nuclear codes, evaluated by taking account of the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  12. ERRORJ. Covariance processing code. Version 2.2

    International Nuclear Information System (INIS)

    Chiba, Go

    2004-07-01

    ERRORJ is a covariance processing code that can produce covariance data of multi-group cross sections, which are essential for uncertainty analyses of nuclear parameters such as the neutron multiplication factor. The ERRORJ code can process the covariance data of cross sections, including resonance parameters and angular and energy distributions of secondary neutrons; those covariance data cannot be processed by other covariance processing codes. ERRORJ has been modified, and version 2.2 has been developed. This document describes the modifications and how to use them. The main modifications are as follows. Non-diagonal elements of covariance matrices are calculated in the resonance energy region. An option for high-speed calculation is implemented. The perturbation amount is optimized in a sensitivity calculation. The effect of resonance self-shielding on the covariance of multi-group cross sections can be considered. It is possible to read a compact covariance format proposed by N.M. Larson. (author)
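    The "sandwich rule" that links such multi-group covariance data to the uncertainty of a response like the multiplication factor can be sketched in a few lines. The 3-group sensitivity vector and relative covariance matrix below are purely hypothetical numbers, not ERRORJ output:

    ```python
    import numpy as np

    # Hypothetical 3-group example: propagate the relative covariance matrix C
    # of a cross section to the uncertainty of a response R (e.g. k-eff) via
    # the sandwich rule:  var(R)/R^2 = S^T C S, with S the sensitivity vector.
    S = np.array([0.10, 0.25, 0.40])          # (dR/R)/(dSigma/Sigma) per group
    C = np.array([[4.0, 1.0, 0.0],
                  [1.0, 2.5, 0.5],
                  [0.0, 0.5, 1.0]]) * 1e-4    # relative covariance (invented)

    var_R = S @ C @ S
    print(f"relative uncertainty of response: {np.sqrt(var_R):.4%}")
    ```

    With correlated groups (the off-diagonal terms) the result differs from simply adding the per-group variances, which is the whole point of processing full covariance matrices.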

  13. SSYST. A code system to analyze LWR fuel rod behavior under accident conditions

    International Nuclear Information System (INIS)

    Gulden, W.; Meyder, R.; Borgwaldt, H.

    1982-01-01

    SSYST (Safety SYSTem) is a modular system to analyze the behavior of light water reactor fuel rods and fuel rod simulators under accident conditions. It has been developed in close cooperation between Kernforschungszentrum Karlsruhe (KfK) and the Institut fuer Kerntechnik und Energiewandlung (IKE), University of Stuttgart, under contract to Projekt Nukleare Sicherheit (PNS) at KfK. Although originally aimed at single-rod analysis, features are available to calculate effects such as blockage ratios of bundles and whole cores. A number of in-pile and out-of-pile experiments were used to assess the system. The main differences versus codes like FRAP-T with similar applications are (1) an open-ended modular code organization, (2) availability of modules of different sophistication levels for the same physical processes, and (3) a preference for simple models wherever possible. The first feature makes SSYST a very flexible tool, easily adapted to changing requirements; the second enables the user to select computational models adequate to the significance of the physical process. Together with the third feature, this leads to short execution times: the analysis of transient rod behavior under LOCA boundary conditions, for example, takes 2 min of CPU time (IBM-3033), so that extensive parametric studies become possible.

  14. Consistent Code Qualification Process and Application to WWER-1000 NPP

    International Nuclear Information System (INIS)

    Berthon, A.; Petruzzi, A.; Giannotti, W.; D'Auria, F.; Reventos, F.

    2006-01-01

    Calculation analyses by application of system codes are performed to evaluate NPP or facility behavior during a postulated transient, or to evaluate the code capability. A calculation analysis constitutes a process that involves the code itself, the data of the reference plant, the data about the transient, the nodalization, and the user; all these elements affect one another and affect the results. A major issue in the use of mathematical models is the model's capability to reproduce the plant or facility behavior under steady-state and transient conditions. These aspects constitute two main checks that must be satisfied during the qualification process: the first is related to the realization of a scheme of the reference plant; the second is related to the capability to reproduce the transient behavior. The aim of this paper is to describe the UMAE (Uncertainty Method based on Accuracy Extrapolation) methodology developed at the University of Pisa for qualifying a nodalization and analysing the calculated results, and to perform the uncertainty evaluation of the system code by the CIAU code (Code with the capability of Internal Assessment of Uncertainty). The activity consists of the re-analysis of Experiment BL-44 (SBLOCA), performed in the LOBI facility, and the analysis of a Kv-scaling calculation of the WWER-1000 NPP nodalization taking test BL-44 as reference. Relap5/Mod3.3 has been used as the thermal-hydraulic system code, and the standard procedure adopted at the University of Pisa has been applied to show the capability of the code to predict the significant aspects of the transient and to obtain a qualified nodalization of the WWER-1000 through a systematic qualitative and quantitative accuracy evaluation. 
    The qualitative accuracy evaluation is based on the selection of Relevant Thermal-hydraulic Aspects (RTAs) and is a prerequisite to the application of the Fast Fourier Transform Based Method (FFTBM), which quantifies
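    The FFTBM figure of merit is commonly defined as the average amplitude AA = Σ|FFT(calc − exp)| / Σ|FFT(exp)|, with small AA indicating good accuracy. A minimal sketch with invented synthetic traces (not LOBI data):

    ```python
    import numpy as np

    # Minimal sketch of the FFTBM average-amplitude figure of merit, assuming
    # the usual definition AA = sum(|FFT(error)|) / sum(|FFT(experiment)|),
    # where error = calculated - experimental over the transient window.
    def average_amplitude(calc, exp):
        err = np.asarray(calc) - np.asarray(exp)
        return np.abs(np.fft.rfft(err)).sum() / np.abs(np.fft.rfft(exp)).sum()

    t = np.linspace(0.0, 10.0, 512)
    exp = 150.0 + 5.0 * np.sin(0.8 * t)     # "experimental" pressure trace
    calc = exp + 0.5 * np.sin(2.0 * t)      # calculation with a small error
    aa = average_amplitude(calc, exp)
    print(f"AA = {aa:.4f}")                 # AA << 1 indicates good accuracy
    ```

    In the full method, AA is evaluated per variable and combined with weights; the snippet above shows only the core transform-and-ratio step.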

  15. Data processing codes for fatigue and tensile tests

    International Nuclear Information System (INIS)

    Sanchez Sarmiento, Gustavo; Iorio, A.F.; Crespi, J.C.

    1981-01-01

    The processing of data from fatigue and tensile tests to obtain several parameters of engineering interest requires a considerable effort of numerical calculus. To reduce the time spent on this work and to establish standard data processing for sets of similar tests, it is very advantageous to have a calculation code to run on a computer. Two codes have been developed in FORTRAN: one predicts cyclic properties of materials from monotonic and incremental or multiple-cyclic-step tests (ENSPRED code), and the other reduces data coming from strain-controlled low-cycle fatigue tests (ENSDET code). Two examples are included using Zircaloy-4 material from different manufacturers. (author)
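    As an illustration of the kind of cyclic property such codes derive, a Ramberg-Osgood cyclic stress-strain relation is a common representation; it is not necessarily the exact model used by ENSPRED/ENSDET, and the parameter values below are placeholders:

    ```python
    # Illustrative only: Ramberg-Osgood cyclic stress-strain relation, a common
    # way to represent cyclic properties (not necessarily the model implemented
    # in ENSPRED/ENSDET; all parameter values are hypothetical placeholders).
    def cyclic_strain_amplitude(stress_amp, E=96.0e3, K=700.0, n=0.12):
        """Total strain amplitude = elastic + plastic part (stresses in MPa).
        E: Young's modulus, K: cyclic strength coefficient, n: cyclic
        strain-hardening exponent."""
        return stress_amp / E + (stress_amp / K) ** (1.0 / n)

    eps = cyclic_strain_amplitude(400.0)
    print(f"strain amplitude at 400 MPa: {eps:.5f}")
    ```

    Fitting K and n to stabilized hysteresis-loop tips from the incremental-step test is the usual data-reduction step such codes automate.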

  16. MRPP: multiregion processing plant code

    International Nuclear Information System (INIS)

    Kee, C.W.; McNeese, L.E.

    1976-09-01

    The report describes the machine solution of a large number (approximately 52,000) of simultaneous linear algebraic equations in which the unknowns are the concentrations of nuclides in the fuel salt of a fluid-fueled reactor (MSBR) having a continuous fuel processing plant. Most of the equations define concentrations at various points in the processing plant. The code allows as input a generalized description of a processing plant flowsheet; it also performs the iterative adjustment of flowsheet parameters for determination of concentrations throughout the flowsheet, and the associated effect of the specified processing mode on the overall reactor operation
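    The core computation described above can be pictured as one large linear solve. The 3-node system below is a toy stand-in for the ~52,000-equation flowsheet system; the coefficients are made up:

    ```python
    import numpy as np

    # Toy stand-in for the MRPP approach: steady-state nuclide concentrations
    # at points of a processing flowsheet solve a linear system A @ c = b,
    # where A encodes flows, decay and removal, and b the sources. The 3x3
    # system below is purely illustrative.
    A = np.array([[ 1.00, -0.20,  0.00],   # node balances (invented numbers)
                  [-0.50,  1.00, -0.10],
                  [ 0.00, -0.30,  1.00]])
    b = np.array([1.0, 0.0, 0.0])          # feed enters at node 1
    c = np.linalg.solve(A, b)
    print("concentrations:", c)
    ```

    Iterative adjustment of flowsheet parameters, as the abstract describes, would wrap such a solve in an outer loop that updates A until the specified processing targets are met.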

  17. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    International Nuclear Information System (INIS)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code using an outer iteration with a Picard scheme. The Picard scheme involves successive updating of the coefficient matrix based on previously calculated values. The scheme is a simple and effective method for nonlinear problems, but its effectiveness greatly depends on the under-relaxation capability. Accuracy and speed of calculation are very sensitive to the under-relaxation factor in the outer iteration that updates the axial mass flow using the continuity equation. The under-relaxation factor in MATRA is generally a fixed value that is empirically determined. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code relative to calculation with a fixed under-relaxation factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence for mass conservation in the subchannel code MATRA. The most efficient approach for adaptive under-relaxation appears to be very problem dependent
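    A generic sketch of Picard iteration with an adaptive under-relaxation factor follows; the grow/cut-back heuristic is illustrative, not MATRA's actual update rule:

    ```python
    import math

    # Sketch of Picard iteration with adaptive under-relaxation: grow the
    # factor while the residual shrinks, cut it back when the residual grows.
    # (A generic heuristic, not the algorithm implemented in MATRA.)
    def picard(g, x0, omega=0.5, tol=1e-10, max_iter=200):
        x, res_old = x0, float("inf")
        for _ in range(max_iter):
            x_new = g(x)
            res = abs(x_new - x)
            if res < tol:
                return x_new
            # adapt: accelerate on progress, damp on divergence
            omega = min(1.0, omega * 1.2) if res < res_old else max(0.05, omega * 0.5)
            x = x + omega * (x_new - x)          # under-relaxed update
            res_old = res
        return x

    # Example: fixed point of g(x) = cos(x), approximately 0.739085
    root = picard(math.cos, 1.0)
    print(f"fixed point: {root:.6f}")
    ```

    In a subchannel code the scalar x would be the vector of axial mass flows and g the continuity-equation update, but the control logic for omega is the same idea.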

  18. Calculation code of mass and heat transfer in a pulsed column for Purex process

    International Nuclear Information System (INIS)

    Tsukada, Takeshi; Takahashi, Keiki

    1993-01-01

    A calculation code was developed for extraction behavior analysis in a pulsed column employed in the extraction process of a reprocessing plant. This code was also combined with our previously developed calculation code for axial temperature profiles in a pulsed column. The one-dimensional dispersion model was employed both for the extraction behavior analysis and for the axial temperature profile analysis. Reported values of the fluid characteristics coefficient, the transfer coefficient and the diffusivities in the pulsed column were used. The calculated steady-state concentration profiles of HNO3, U and Pu are in good agreement with the reported experimental results. The concentration and temperature profiles were calculated under operating conditions that induce abnormal U extraction behavior, i.e. the U extraction zone moves to the bottom of the column. Though there are slight differences between calculated and experimental values, it appears that the developed code can be applied to simulation under normal operating conditions and relatively slow transient conditions. Pu accumulation phenomena were analyzed with this code, and the accumulation tendency is similar to the reported analysis results. (author)

  19. Performance analysis of linear codes under maximum-likelihood decoding: a tutorial

    National Research Council Canada - National Science Library

    Sason, Igal; Shamai, Shlomo

    2006-01-01

    ..., upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds, with the focus put on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we ad...
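    The simplest member of the surveyed family of upper bounds is the union bound assembled from a code's weight enumerator. A sketch for the (7,4) Hamming code on a BPSK/AWGN channel:

    ```python
    import math

    # Union bound on block error probability of a linear code under ML decoding
    # on a BPSK/AWGN channel: P_e <= sum_{d>0} A_d * Q(sqrt(2*d*R*Eb/N0)).
    # Weight enumerator of the (7,4) Hamming code: A_3 = 7, A_4 = 7, A_7 = 1.
    def qfunc(x):
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def union_bound(weight_enum, n, k, ebno_db):
        rate = k / n
        ebno = 10.0 ** (ebno_db / 10.0)
        return sum(a * qfunc(math.sqrt(2.0 * d * rate * ebno))
                   for d, a in weight_enum.items())

    pe = union_bound({3: 7, 4: 7, 7: 1}, n=7, k=4, ebno_db=6.0)
    print(f"union bound at Eb/N0 = 6 dB: {pe:.3e}")
    ```

    The Gallager-style bounds the tutorial surveys tighten this at low SNR, where the plain union bound becomes loose or exceeds 1.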

  20. Electronic data processing codes for California wildland plants

    Science.gov (United States)

    Merton J. Reed; W. Robert Powell; Bur S. Bal

    1963-01-01

    Systematized codes for plant names are helpful to a wide variety of workers who must record the identity of plants in the field. We have developed such codes for a majority of the vascular plants encountered on California wildlands and have published the codes in pocket size, using photo-reductions of the output from data processing machines. A limited number of the...

  1. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free-recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic < semantic), whereas this manipulation had little effect on picture memory performance. These data provided strong support for the dual-coding model.

  2. Progress on China nuclear data processing code system

    Science.gov (United States)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the entire energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved resonance energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. Ruler is written in Fortran-90 and has been tested on 32-bit computers under the Windows-XP and Linux operating systems. Verification of Ruler has been performed by comparison with calculation results obtained with the NJOY99 [3] processing code. Validation has been performed using the WIMSD5B code.
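    The basic operation behind group cross-section generation, in Ruler as in NJOY, is a flux-weighted collapse of pointwise data onto a group structure. A sketch with an invented 1/v-like cross section and a flat weighting spectrum:

    ```python
    import numpy as np

    # Sketch of flux-weighted group collapse: for each group g,
    # sigma_g = sum(sigma * phi) / sum(phi) over the points in the group.
    # Energy grid, cross section and spectrum below are invented examples.
    def collapse(energy, sigma, phi, group_bounds):
        sig_g = []
        for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
            m = (energy >= lo) & (energy < hi)
            sig_g.append((sigma[m] * phi[m]).sum() / phi[m].sum())
        return np.array(sig_g)

    E = np.linspace(1.0, 100.0, 991)     # pointwise energy grid (arbitrary units)
    sigma = 10.0 / np.sqrt(E)            # 1/v-like pointwise cross section
    phi = np.ones_like(E)                # flat weighting spectrum
    sig_g = collapse(E, sigma, phi, [1.0, 10.0, 100.0])
    print("group cross sections:", sig_g)
    ```

    Real processing codes integrate on a union grid with proper interpolation laws and self-shielded weighting spectra; the point-sum above only shows the averaging principle.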

  3. Post-processing of the TRAC code's results

    International Nuclear Information System (INIS)

    Baron, J.H.; Neuman, D.

    1987-01-01

    The TRAC code serves for the analysis of accidents in nuclear installations from the thermohydraulic point of view. A program has been developed to rapidly process the information generated by the code, with graphical display on screen, in both high and low resolution, or on paper through a printer or plotter. Although the programs are intended to be used after TRAC runs, they may also be used while the program is running, so as to observe the calculation process. The advantages of employing this type of tool, its present capabilities and its possibilities for expansion according to the user's needs are described herein. (Author)

  4. Summary of ENDF/B pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1981-12-01

    This document contains the summary documentation for the ENDF/B pre-processing codes: LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc. For the latest published documentation on the methods used in these codes see UCRL-50400, Vol.17 parts A-E, Lawrence Livermore Laboratory (1979)

  5. The status of simulation codes for extraction process using mixer-settler

    Energy Technology Data Exchange (ETDEWEB)

    Byeon, Kee Hoh; Lee, Eil Hee; Kwon, Seong Gil; Kim, Kwang Wook; Yang, Han Beom; Chung, Dong Yong; Lim, Jae Kwan; Shin, Hyun Kyoo; Kim, Soo Ho

    1999-10-01

    We have studied and analyzed mixer-settler simulation codes, namely three versions of the SEPHIS series, PUBG, and EXTRA.M, the most recently developed code. All of these are sufficiently satisfactory codes in the field of process/device modeling, but accurate distribution data and chemical reaction mechanisms must be formulated for the sake of accuracy and reliability. For application to the group separation process, the mixer-settler models of these codes pose no problems, but the accumulation and formulation of partitioning and reaction equilibrium data for the chemical elements used in the group separation process is very important. (author)

  6. The enhanced variance propagation code for the Idaho Chemical Processing Plant

    International Nuclear Information System (INIS)

    Kern, E.A.; Zack, N.R.; Britschgi, J.J.

    1992-01-01

    The Variance Propagation (VP) code was developed by the Los Alamos National Laboratory's Safeguards Systems Group to provide off-line variance propagation and systems analysis for nuclear material processing facilities. The code can also be used as a tool in the design and evaluation of material accounting systems. In this regard, the VP code was enhanced to incorporate a model of the material accountability measurements used in the Idaho Chemical Processing Plant operated by the Westinghouse Idaho Nuclear Company. Inputs to the code were structured to account for the dissolver/headend process and the waste streams, and a sensitivity analysis was performed to determine the sensitivity of measurement and sampling errors to the overall material balance error. We determined that the material balance error is very sensitive to changes in the sampling errors. 3 refs
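    First-order variance propagation for a material balance can be sketched as follows; the measurement values and error fractions are made-up illustrations, not ICPP data:

    ```python
    import math

    # Illustrative first-order variance propagation for a material balance
    # MB = input - output - (ending inventory - beginning inventory); with
    # independent measurements the variances of the terms simply add.
    terms = {                      # value (kg), relative standard deviation
        "input":     (100.0, 0.005),
        "output":    ( 98.5, 0.005),
        "begin_inv": ( 10.0, 0.020),
        "end_inv":   ( 11.0, 0.020),
    }
    mb = (terms["input"][0] - terms["output"][0]
          - (terms["end_inv"][0] - terms["begin_inv"][0]))
    sigma_mb = math.sqrt(sum((v * r) ** 2 for v, r in terms.values()))
    print(f"MB = {mb:.2f} kg, sigma_MB = {sigma_mb:.3f} kg")
    ```

    The sensitivity result quoted in the abstract corresponds to varying the sampling-error components in such a sum and observing the change in sigma_MB.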

  7. Code of Conduct for Gas Marketers : rule made under part 3 of the Ontario Energy Board Act, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-02

    Text of the code of conduct for gas marketers in Ontario is presented. This code sets the minimum standards under which a gas marketer may sell or offer to sell gas to a low-volume consumer, or act as an agent or broker with respect to the sale of gas. The document describes the standards and principles regarding: (1) fair marketing practices, (2) identification, (3) information to be maintained by a gas marketer, (4) confidentiality of consumer information, (5) conditions in offers, (6) contracts, (7) contract renewals, (8) assignment, sale and transfer contracts, (9) independent arms-length consumer complaints resolution process, and (10) penalties for breach of this code.

  8. Code of Conduct for Gas Marketers : rule made under part 3 of the Ontario Energy Board Act, 1998

    International Nuclear Information System (INIS)

    1999-01-01

    Text of the code of conduct for gas marketers in Ontario is presented. This code sets the minimum standards under which a gas marketer may sell or offer to sell gas to a low-volume consumer, or act as an agent or broker with respect to the sale of gas. The document describes the standards and principles regarding: (1) fair marketing practices, (2) identification, (3) information to be maintained by a gas marketer, (4) confidentiality of consumer information, (5) conditions in offers, (6) contracts, (7) contract renewals, (8) assignment, sale and transfer contracts, (9) independent arms-length consumer complaints resolution process, and (10) penalties for breach of this code

  9. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems by making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. As parallel computers, we have used both a vector-parallel processor and a scalar-parallel processor in the performance evaluation. We have carried out (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results for parallel processing on these two types of parallel or distributed-memory computers. In addition, we discuss the evaluation of parallel programming environments for the parallel computers used in the present work, as part of the work on developing the STA (Seamless Thinking Aid) basic software. (author)
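    The particle independence exploited above can be illustrated with batches drawing from independent random streams whose tallies are simply summed; each batch below runs sequentially, but could equally be dispatched to a separate processor:

    ```python
    import numpy as np

    # The property that makes MC transport parallelizable: histories are
    # independent, so the workload splits into batches with independent
    # random streams and the tallies combine by addition. Estimating pi by
    # sampling a quarter circle stands in for a transport tally here.
    def batch_tally(seed, n):
        rng = np.random.default_rng(seed)            # independent stream
        x, y = rng.random(n), rng.random(n)
        return np.count_nonzero(x * x + y * y < 1.0)  # "hits" tally

    streams = np.random.SeedSequence(42).spawn(8)    # 8 independent sub-streams
    n_per_batch = 100_000
    hits = sum(batch_tally(s, n_per_batch) for s in streams)
    pi_est = 4.0 * hits / (8 * n_per_batch)
    print(f"pi estimate from 8 independent batches: {pi_est:.4f}")
    ```

    SeedSequence.spawn guarantees statistically independent streams, which is the modern analogue of the leapfrog/stride seeding schemes used on machines like the Paragon.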

  10. Software quality and process improvement in scientific simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J.; Webster, R. [Los Alamos National Lab., NM (United States)

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. The study is based on the experience of the authors and on interviews with ten subjects chosen from simulation code development teams at LANL. The study is descriptive rather than scientific.

  11. Coded Ultrasound for Blood Flow Estimation Using Subband Processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2008-01-01

    This paper investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise ratio (SNR) and velocity estimation performance. In this paper, broadband coded signals are used to increase SNR, followed by subband processing: the received broadband signal is filtered using a set of narrow-band filters, and estimating the velocity in each of the bands and averaging the results yields better performance compared with what would be possible when transmitting a narrow-band signal. Since the excitation signal is broadband, it has good spatial resolution after pulse compression; this means that time can be saved by using the same data for B-mode imaging and blood flow estimation. Two different coding schemes are used in this paper, Barker codes and Golay codes. The performance of the codes...
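    Pulse compression with one of the schemes named above, a Barker-13 code, can be sketched with a matched filter; the received trace below is synthetic:

    ```python
    import numpy as np

    # Pulse compression with a Barker-13 code: correlating the received signal
    # with the transmitted code concentrates the energy into a narrow peak with
    # low sidelobes, restoring axial resolution after a long (high-energy)
    # transmission. The echoes below are an invented illustration.
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

    rx = np.zeros(60)
    rx[20:33] += barker13            # echo from a scatterer at sample 20
    rx[40:53] += 0.5 * barker13      # weaker echo at sample 40

    compressed = np.correlate(rx, barker13, mode="valid")
    print("peak positions:", np.flatnonzero(compressed > 6.0))
    ```

    The Barker-13 autocorrelation has peak 13 with sidelobes of magnitude at most 1, so both echoes stand out clearly after compression; Golay pairs go further and cancel the sidelobes entirely by summing two transmissions.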

  12. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
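    The (7,4) Hamming code the book introduces can be demonstrated end to end: encode 4 data bits, corrupt one bit in the channel, and correct it via the syndrome:

    ```python
    import numpy as np

    # Minimal (7,4) Hamming code in systematic form: G = [I | P], H = [P^T | I].
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    data = np.array([1, 0, 1, 1])
    codeword = data @ G % 2
    received = codeword.copy()
    received[2] ^= 1                       # single-bit channel error

    syndrome = H @ received % 2
    # the syndrome equals the column of H at the error position
    err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
    corrected = received.copy()
    corrected[err_pos] ^= 1
    assert np.array_equal(corrected, codeword)
    print("corrected codeword:", corrected)
    ```

    Because all seven columns of H are distinct and nonzero, every single-bit error yields a unique syndrome, which is exactly the minimum-distance-3 property of the Hamming code.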

  13. Processes of code status transitions in hospitalized patients with advanced cancer.

    Science.gov (United States)

    El-Jawahri, Areej; Lau-Min, Kelsey; Nipp, Ryan D; Greer, Joseph A; Traeger, Lara N; Moran, Samantha M; D'Arpino, Sara M; Hochberg, Ephraim P; Jackson, Vicki A; Cashavelly, Barbara J; Martinson, Holly S; Ryan, David P; Temel, Jennifer S

    2017-12-15

    Although hospitalized patients with advanced cancer have a low chance of surviving cardiopulmonary resuscitation (CPR), the processes by which they change their code status from full code to do not resuscitate (DNR) are unknown. We conducted a mixed-methods study on a prospective cohort of hospitalized patients with advanced cancer. Two physicians used a consensus-driven medical record review to characterize processes that led to code status order transitions from full code to DNR. In total, 1047 hospitalizations were reviewed among 728 patients. Admitting clinicians did not address code status in 53% of hospitalizations, resulting in code status orders of "presumed full." In total, 275 patients (26.3%) transitioned from full code to DNR, and 48.7% (134 of 275 patients) of those had an order of "presumed full" at admission; however, upon further clarification, the patients expressed that they had wished to be DNR before the hospitalization. We identified 3 additional processes leading to order transition from full code to DNR: acute clinical deterioration (15.3%), discontinuation of cancer-directed therapy (17.1%), and education about the potential harms/futility of CPR (15.3%). Compared with discontinuing therapy and education, transitions because of acute clinical deterioration were associated with less patient involvement (P = .002) and a shorter time to death. Nearly half of code status transitions in hospitalized patients with advanced cancer were because of full code orders in patients who had a preference for DNR before hospitalization. Transitions due to acute clinical deterioration were associated with less patient engagement and a higher likelihood of inpatient death. Cancer 2017;123:4895-902. © 2017 American Cancer Society.

  14. Streamline processing of discrete nuclear spectra by means of authoregularized iteration process (the KOLOBOK code)

    International Nuclear Information System (INIS)

    Gadzhokov, V.; Penev, I.; Aleksandrov, L.

    1979-01-01

    A brief description is given of the KOLOBOK computer code, designed for streamline processing of discrete nuclear spectra with a symmetric Gaussian single-line shape, on computers of the ES series, models 1020 and above. The program solves the stream of nonlinear problems generated in discrete spectrometry by means of an authoregularized iteration process. The Fortran-4 text of the code is given in an appendix
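    As a simple stand-in for Gaussian single-line processing (not the authoregularized algorithm itself), the position and width of a synthetic line can be estimated from its statistical moments:

    ```python
    import numpy as np

    # Moment-based estimate of a Gaussian line's centroid and width from a
    # spectrum sampled at channel positions. A stand-in illustration only;
    # KOLOBOK's authoregularized iterative fit is a different algorithm.
    channels = np.arange(200)
    peak = 500.0 * np.exp(-0.5 * ((channels - 80.0) / 4.0) ** 2)  # synthetic line

    w = peak / peak.sum()                          # normalized weights
    centroid = (w * channels).sum()                # first moment -> position
    sigma = np.sqrt((w * (channels - centroid) ** 2).sum())
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
    print(f"centroid = {centroid:.2f}, FWHM = {fwhm:.2f} channels")
    ```

    Moment estimates like these are commonly used to seed an iterative nonlinear fit, which then refines amplitude, position and width against the measured counts.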

  15. Colors and geometric forms in the work process information coding

    Directory of Open Access Journals (Sweden)

    Čizmić Svetlana

    2006-01-01

    The aim of the research was to establish the meaning of colors and geometric shapes in transmitting information in the work process. A sample of 100 students associated 50 situations, each connected with regular tasks in the work process, with 12 colors and 4 geometric forms in a previously chosen color. Based on the chosen color-geometric shape-situation assignments, the idea of the research was to find regularities in the coding of information, to examine whether those regularities provide meaningful data for each individual code, and to explain which codes are better and more applicable representations of the examined situations.

  16. Linking CATHENA with other computer codes through a remote process

    Energy Technology Data Exchange (ETDEWEB)

    Vasic, A.; Hanna, B.N.; Waddington, G.M. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Sabourin, G. [Atomic Energy of Canada Limited, Montreal, Quebec (Canada); Girard, R. [Hydro-Quebec, Montreal, Quebec (Canada)

    2005-07-01

    CATHENA (Canadian Algorithm for THErmalhydraulic Network Analysis) is a computer code developed by Atomic Energy of Canada Limited (AECL). The code uses a transient, one-dimensional, two-fluid representation of two-phase flow in piping networks. CATHENA is used primarily for the analysis of postulated upset conditions in CANDU reactors; however, the code has found a wider range of applications. In the past, the CATHENA thermalhydraulics code included other specialized codes, i.e. ELOCA and the Point LEPreau CONtrol system (LEPCON), as callable subroutine libraries. The combined program was compiled and linked as a separately named code. This code organizational process is not suitable for independent development, maintenance, validation and version tracking of separate computer codes. An alternative solution that provides code development independence is to link CATHENA to other computer codes through a Parallel Virtual Machine (PVM) interface process. PVM is a public domain software package developed by Oak Ridge National Laboratory; it enables a heterogeneous collection of computers connected by a network to be used as a single large parallel machine. The PVM approach has been well accepted by the global computing community and has been used successfully for solving large-scale problems in science, industry, and business. Once development of the appropriate interface for linking independent codes through PVM is completed, future versions of component codes can be developed, distributed separately and coupled as needed by the user. This paper describes the coupling of CATHENA to the ELOCA-IST and TROLG2 codes through a PVM remote process as an illustration of possible code connections. ELOCA (Element Loss Of Cooling Analysis) is the Industry Standard Toolset (IST) code developed by AECL to simulate the thermo-mechanical response of CANDU fuel elements to transient thermalhydraulic boundary conditions. A separate ELOCA driver program
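
    PVM itself exposes a C/Fortran message-passing API; the coupling pattern described — a driver code exchanging boundary conditions with an independently developed component code each time step — can be illustrated with a language-neutral sketch. The sketch below uses Python threads and queues purely as a stand-in for PVM tasks and messages; the toy fuel-temperature model and all names are illustrative assumptions, not AECL code.

```python
import threading
import queue

def fuel_code(inbox, outbox):
    """Stand-in for an ELOCA-like component code running as a separate task."""
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown message from the driver
            break
        t, coolant_temp = msg
        # Toy thermo-mechanical response: fuel runs hotter than coolant.
        outbox.put((t, coolant_temp + 50.0))

to_fuel, from_fuel = queue.Queue(), queue.Queue()
worker = threading.Thread(target=fuel_code, args=(to_fuel, from_fuel))
worker.start()

# Driver loop: a CATHENA-like code sends thermalhydraulic boundary
# conditions and receives the component code's response each time step.
history = []
coolant_temp = 300.0
for step in range(5):
    to_fuel.put((step, coolant_temp))
    t, fuel_temp = from_fuel.get()
    history.append(fuel_temp)
    coolant_temp += 10.0         # toy transient

to_fuel.put(None)
worker.join()
```

    The point of the pattern is that each side only sees messages, so either code can be rebuilt, versioned and validated on its own — exactly the independence argument made in the abstract.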

  17. Linking CATHENA with other computer codes through a remote process

    International Nuclear Information System (INIS)

    Vasic, A.; Hanna, B.N.; Waddington, G.M.; Sabourin, G.; Girard, R.

    2005-01-01

    CATHENA (Canadian Algorithm for THErmalhydraulic Network Analysis) is a computer code developed by Atomic Energy of Canada Limited (AECL). The code uses a transient, one-dimensional, two-fluid representation of two-phase flow in piping networks. CATHENA is used primarily for the analysis of postulated upset conditions in CANDU reactors; however, the code has found a wider range of applications. In the past, the CATHENA thermalhydraulics code included other specialized codes, i.e. ELOCA and the Point LEPreau CONtrol system (LEPCON), as callable subroutine libraries. The combined program was compiled and linked as a separately named code. This code organizational process is not suitable for independent development, maintenance, validation and version tracking of separate computer codes. An alternative solution that provides code development independence is to link CATHENA to other computer codes through a Parallel Virtual Machine (PVM) interface process. PVM is a public domain software package developed by Oak Ridge National Laboratory; it enables a heterogeneous collection of computers connected by a network to be used as a single large parallel machine. The PVM approach has been well accepted by the global computing community and has been used successfully for solving large-scale problems in science, industry, and business. Once development of the appropriate interface for linking independent codes through PVM is completed, future versions of component codes can be developed, distributed separately and coupled as needed by the user. This paper describes the coupling of CATHENA to the ELOCA-IST and TROLG2 codes through a PVM remote process as an illustration of possible code connections. ELOCA (Element Loss Of Cooling Analysis) is the Industry Standard Toolset (IST) code developed by AECL to simulate the thermo-mechanical response of CANDU fuel elements to transient thermalhydraulic boundary conditions. A separate ELOCA driver program starts, ends

  18. Calculation code revised MIXSET for Purex process

    International Nuclear Information System (INIS)

    Gonda, Kozo; Oka, Koichiro; Fukuda, Shoji

    1979-02-01

    Revised MIXSET is a FORTRAN IV calculation code developed to simulate steady and transient behaviors of the Purex extraction process and to calculate the optimum operating condition of the process. Revised MIXSET includes all the functions of the MIXSET code, as follows. a) A chemical system of up to eight components can be handled, with or without mutual dependence of the distribution of components. b) The flowrate and concentration of feed can be renewed successively at any state, transient or steady, for searching optimum operating conditions. c) Optimum inputs of feed concentrations and flowrates can be calculated to satisfy both the specification and the recovery rate of a product. d) Radioactive decay reactions can be handled for each component. Besides these functions, the following chemical reactions of the Purex process are newly included in the Revised MIXSET code, so that the quantitative changes of components such as H⁺, U(IV), U(VI), Pu(III), Pu(IV), NH₂OH and N₂H₄ can be simulated: (i) reduction of Pu(IV): U⁴⁺ + 2Pu⁴⁺ + 2H₂O → UO₂²⁺ + 2Pu³⁺ + 4H⁺; (ii) oxidation of Pu(III): 2Pu³⁺ + 3H⁺ + NO₃⁻ → 2Pu⁴⁺ + HNO₂ + H₂O; (iii) oxidation of U(IV): U⁴⁺ + NO₃⁻ + H₂O → UO₂²⁺ + H⁺ + HNO₂ and 2U⁴⁺ + O₂ + 2H₂O → 2UO₂²⁺ + 4H⁺; (iv) decomposition of HNO₂: HNO₂ + N₂H₅⁺ → HN₃ + 2H₂O + H⁺. (author)
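
    As an illustration of the kind of bookkeeping such a code performs, the sketch below integrates reaction (i), the reduction of Pu(IV) by U(IV), with an explicit Euler step. The second-order rate law and the rate constant are illustrative assumptions, not MIXSET's actual kinetics.

```python
# Toy simulation of reaction (i): U4+ + 2Pu4+ + 2H2O -> UO2(2+) + 2Pu3+ + 4H+
# Assumed rate law r = k*[U4+]*[Pu4+]; stoichiometry fixes the couplings.
k, dt, n_steps = 0.5, 1e-3, 5000

conc = {"U4+": 0.10, "Pu4+": 0.05, "UO2^2+": 0.0, "Pu3+": 0.0}  # mol/L
for _ in range(n_steps):
    r = k * conc["U4+"] * conc["Pu4+"]
    conc["U4+"]    -= r * dt          # consumed 1 per reaction event
    conc["Pu4+"]   -= 2 * r * dt      # consumed 2 per reaction event
    conc["UO2^2+"] += r * dt
    conc["Pu3+"]   += 2 * r * dt
```

    Because each stoichiometric increment is applied in matched pairs, total uranium and total plutonium are conserved exactly at every step — the same balance a process code must maintain across its mixer-settler stages.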

  19. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.

    Science.gov (United States)

    Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland

    2011-04-08

    In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.

  20. Effect of difference between group constants processed by codes TIMS and ETOX on integral quantities

    International Nuclear Information System (INIS)

    Takano, Hideki; Ishiguro, Yukio; Matsui, Yasushi.

    1978-06-01

    Group constants of ²³⁵U, ²³⁸U, ²³⁹Pu, ²⁴⁰Pu and ²⁴¹Pu have been produced with the processing code TIMS using the evaluated nuclear data of JENDL-1. The temperature- and composition-dependent self-shielding factors have been calculated for the two cases with and without considering mutual interference of resonant nuclei. Using the group constants set produced by the TIMS code, the integral quantities, i.e. multiplication factor, Na-void reactivity effect and Doppler reactivity effect, are calculated and compared with those calculated using the cross section set produced by the ETOX code, to evaluate the accuracy of the approximate calculation method in ETOX. The self-shielding factors in each energy group differ considerably between the two codes. For the fast reactor assemblies under study, however, the integral quantities calculated with these two sets are in good agreement with each other, because of cancellation of errors. (auth.)
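
    The self-shielding factors compared above follow the Bondarenko scheme: a resonant cross section is averaged with a flux that dips where the total cross section peaks, and the ratio to the infinite-dilution average gives a factor f ≤ 1 that depends on the background (dilution) cross section. The single-resonance shape and all parameters below are illustrative assumptions, not JENDL-1 data.

```python
import numpy as np

def shielding_factor(sigma0, e0, gamma, sigma_b, emin=1.0, emax=100.0, n=20001):
    """Bondarenko-type self-shielding factor for one Lorentzian resonance.

    Narrow-resonance flux ~ 1/(sigma + sigma_b), where sigma_b is the
    background cross section per resonant nuclide.
    """
    e = np.linspace(emin, emax, n)                    # uniform energy grid
    sigma = sigma0 / (1.0 + ((e - e0) / gamma) ** 2)  # resonance cross section
    flux = 1.0 / (sigma + sigma_b)                    # NR-approximation flux
    shielded = (sigma * flux).sum() / flux.sum()      # flux-weighted average
    infinite_dilution = sigma.mean()                  # flat-flux average
    return shielded / infinite_dilution

f_dilute   = shielding_factor(1000.0, 50.0, 1.0, sigma_b=1e6)   # ~1
f_shielded = shielding_factor(1000.0, 50.0, 1.0, sigma_b=10.0)  # strongly shielded
```

    At high dilution the flux is nearly flat and f approaches 1; at low dilution the flux depression at the resonance drives f well below 1, which is exactly where processing approximations like those in ETOX can diverge from a more exact treatment.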

  1. Offer and acceptance under the Russian Civil Code

    OpenAIRE

    Musin, Valery

    2013-01-01

    The article deals with the procedure of entering into a contract under Russian civil law in both domestic and foreign markets. Offer and acceptance are considered in the light of relevant provisions of the Russian Civil Codes of 1922 and 1964 and of the Code currently in effect, as compared with the rules of the UN Convention on Contracts for the International Sale of Goods 1980 and the UNIDROIT Principles of International Commercial Contracts 2010.

  2. Offer and Acceptance under the Russian Civil Code

    Directory of Open Access Journals (Sweden)

    Valery Musin

    2013-01-01

    The article deals with the procedure of entering into a contract under Russian civil law in both domestic and foreign markets. Offer and acceptance are considered in the light of relevant provisions of the Russian Civil Codes of 1922 and 1964 and of the Code currently in effect, as compared with the rules of the UN Convention on Contracts for the International Sale of Goods 1980 and the UNIDROIT Principles of International Commercial Contracts 2010.

  3. SCAMPI: A code package for cross-section processing

    International Nuclear Information System (INIS)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-01-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  4. SCAMPI: A code package for cross-section processing

    Energy Technology Data Exchange (ETDEWEB)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.
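
    The fine-group to broad-group step mentioned above is, at its core, a flux-weighted collapse. The sketch below shows the standard collapse formula on made-up numbers; the group structure, fluxes, and cross sections are illustrative assumptions, not SCAMPI data.

```python
import numpy as np

def collapse(sigma_fine, flux_fine, broad_edges):
    """Flux-weighted collapse of fine-group cross sections to broad groups.

    broad_edges[i]:broad_edges[i+1] gives the fine-group indices that make
    up broad group i.
    """
    sigma_broad = []
    for lo, hi in zip(broad_edges[:-1], broad_edges[1:]):
        phi = flux_fine[lo:hi]
        sigma_broad.append(np.dot(sigma_fine[lo:hi], phi) / phi.sum())
    return np.array(sigma_broad)

# Eight fine groups collapsed into two broad groups of four each.
sigma_fine = np.array([1.2, 1.5, 2.0, 2.4, 5.0, 7.5, 9.0, 12.0])
flux_fine  = np.array([4.0, 3.0, 2.0, 1.0, 0.8, 0.5, 0.3, 0.1])
sigma_broad = collapse(sigma_fine, flux_fine, [0, 4, 8])
```

    In a real one-step path the weighting flux would itself come from a problem-dependent spectrum calculation, which is why the collapse must be problem-specific.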

  5. ERRORJ. Covariance processing code system for JENDL. Version 2

    International Nuclear Information System (INIS)

    Chiba, Gou

    2003-09-01

    ERRORJ is the covariance processing code system for the Japanese Evaluated Nuclear Data Library (JENDL); it produces group-averaged covariance data for use in uncertainty analyses of nuclear characteristics. ERRORJ can treat covariance data for cross sections, including resonance parameters, as well as for angular and energy distributions of secondary neutrons, which could not be handled by earlier covariance processing codes. In addition, ERRORJ can treat various forms of multi-group cross sections and produce multi-group covariance files in various formats. This document describes an outline of ERRORJ and how to use it. (author)
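
    Group-averaging a covariance matrix follows the same flux-weighted pattern as cross-section collapse, applied on both sides (the "sandwich" rule). The sketch below builds a fine-group covariance matrix and collapses it to two broad groups; all numbers and the assumed correlation structure are illustrative, not ERRORJ output.

```python
import numpy as np

# Fine-group relative standard deviations and an assumed correlation
# structure (exponential decay with group distance).
n_fine = 6
std = np.array([0.02, 0.03, 0.05, 0.04, 0.06, 0.08])
corr = np.exp(-np.abs(np.subtract.outer(range(n_fine), range(n_fine))) / 2.0)
cov_fine = np.outer(std, std) * corr        # fine-group covariance matrix

# Flux-weighted collapse operator S: two broad groups of three fine groups.
flux = np.array([3.0, 2.0, 1.0, 0.9, 0.6, 0.5])
S = np.zeros((2, n_fine))
S[0, 0:3] = flux[0:3] / flux[0:3].sum()
S[1, 3:6] = flux[3:6] / flux[3:6].sum()

# Sandwich rule: C_broad = S C_fine S^T.
cov_broad = S @ cov_fine @ S.T
```

    Because the collapse is a congruence transform, symmetry and positive semi-definiteness of the fine-group covariance carry over to the broad-group matrix.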

  6. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme

    Directory of Open Access Journals (Sweden)

    Winch Peter J

    2011-04-01

    Background: In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. Methods: Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Results: Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Conclusions: Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.

  7. A proposal for further integration of the cyanobacteria under the Bacteriological Code.

    Science.gov (United States)

    Oren, Aharon

    2004-09-01

    This taxonomic note reviews the present status of the nomenclature of the cyanobacteria under the Bacteriological Code. No more than 13 names of cyanobacterial species have been proposed so far in the International Journal of Systematic and Evolutionary Microbiology (IJSEM)/International Journal of Systematic Bacteriology (IJSB), and of these only five are validly published. The cyanobacteria (Cyanophyta, blue-green algae) are also named under the Botanical Code, and the dual nomenclature system causes considerable confusion. This note calls for a more intense involvement of the International Committee on Systematics of Prokaryotes (ICSP), its Judicial Commission and its Subcommittee on the Taxonomy of Photosynthetic Prokaryotes in the nomenclature of the cyanobacteria under the Bacteriological Code. The establishment of minimal standards for the description of new species and genera should be encouraged in a way that will be acceptable to the botanical authorities as well. This should be followed by the publication of an 'Approved List of Names of Cyanobacteria' in IJSEM. The ultimate goal is to achieve a consensus nomenclature that is acceptable both to bacteriologists and to botanists, anticipating the future implementation of a universal 'Biocode' that would regulate the nomenclature of all organisms living on Earth.

  8. Developing improved MD codes for understanding processive cellulases

    International Nuclear Information System (INIS)

    Crowley, M F; Nimlos, M R; Himmel, M E; Uberbacher, E C; Iii, C L Brooks; Walker, R C

    2008-01-01

    The mechanism of action of cellulose-degrading enzymes is illuminated through a multidisciplinary collaboration that uses molecular dynamics (MD) simulations and expands the capabilities of MD codes to allow simulations of enzymes and substrates on petascale computational facilities. There is a class of glycoside hydrolase enzymes called cellulases that are thought to decrystallize and processively depolymerize cellulose using biochemical processes that are largely not understood. Understanding the mechanisms involved and improving the efficiency of this hydrolysis process through computational models and protein engineering presents a compelling grand challenge. A detailed understanding of cellulose structure, dynamics and enzyme function at the molecular level is required to direct protein engineers to the right modifications or to understand if natural thermodynamic or kinetic limits are in play. Much can be learned about processivity by conducting carefully designed molecular dynamics (MD) simulations of the binding and catalytic domains of cellulases with various substrate configurations, solvation models and thermodynamic protocols. Most of these numerical experiments, however, will require significant modification of existing code and algorithms in order to efficiently use current (terascale) and future (petascale) hardware to the degree of parallelism necessary to simulate a system of the size proposed here. This work will develop MD codes that can efficiently use terascale and petascale systems, not just for simple classical MD simulations, but also for more advanced methods, including umbrella sampling with complex restraints and reaction coordinates, transition path sampling, steered molecular dynamics, and quantum mechanical/molecular mechanical simulations of systems the size of cellulose-degrading enzymes acting on cellulose.

  9. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of the aerosol onto surfaces within the containment, from where it is much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way.
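
    The agglomeration growth process mentioned above is classically described by the discrete Smoluchowski coagulation equation. The sketch below integrates it with a constant collision kernel and an explicit Euler step, restricting collisions so that total aerosol mass in the tracked range is conserved; the kernel value, initial spectrum, and step size are illustrative assumptions, not values from any of the compared codes.

```python
import numpy as np

K = 1.0e-3          # assumed constant coagulation kernel
kmax = 20           # largest particle size tracked (in monomer units)
dt, n_steps = 0.1, 200

n = np.zeros(kmax + 1)      # n[k] = number concentration of k-mers
n[1] = 100.0                # start from monomers only

for _ in range(n_steps):
    dn = np.zeros_like(n)
    for i in range(1, kmax + 1):
        for j in range(i, kmax + 1):
            if i + j > kmax:
                continue     # ignore collisions leaving the tracked range
            # Factor 0.5 avoids double-counting identical-size collisions.
            rate = K * n[i] * n[j] * (0.5 if i == j else 1.0)
            dn[i] -= rate
            dn[j] -= rate
            dn[i + j] += rate
    n += dt * dn
```

    Each collision event removes an i-mer and a j-mer and creates an (i+j)-mer, so the first moment (total mass) is conserved term by term; production codes add size-dependent kernels, condensation, and the deposition sinks discussed in the abstract.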

  10. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. 
I also present

  11. Kombucha brewing under the Food and Drug Administration model Food Code: risk analysis and processing guidance.

    Science.gov (United States)

    Nummer, Brian A

    2013-11-01

    Kombucha is a fermented beverage made from brewed tea and sugar. The taste is slightly sweet and acidic and it may have residual carbon dioxide. Kombucha is consumed in many countries as a health beverage and it is gaining in popularity in the U.S. Consequently, many retailers and food service operators are seeking to brew this beverage on site. As a fermented beverage, kombucha would be categorized in the Food and Drug Administration model Food Code as a specialized process and would require a variance with submission of a food safety plan. This special report was created to assist both operators and regulators in preparing or reviewing a kombucha food safety plan.

  12. Real-time Color Codes for Assessing Learning Process

    OpenAIRE

    Dzelzkalēja, L; Kapenieks, J

    2016-01-01

    Effective assessment is an important way for improving the learning process. There are existing guidelines for assessing the learning process, but they lack holistic digital knowledge society considerations. In this paper the authors propose a method for real-time evaluation of students’ learning process and, consequently, for quality evaluation of teaching materials both in the classroom and in the distance learning environment. The main idea of the proposed Color code method (CCM) is to use...

  13. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics.
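
    A minimal Gaussian process emulator can be sketched in a few lines: fit a GP with a squared-exponential kernel to a handful of "code runs", then predict with uncertainty at untried inputs. The toy code, kernel choice, and length-scale below are my assumptions for illustration, not the models used in the paper.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / length ** 2)

def expensive_code(x):
    """Stand-in for a slow simulator (cheap to call here, by assumption)."""
    return np.sin(x) + 0.5 * x

# A few 'code runs' serve as training data for the emulator.
x_train = np.linspace(0.0, 6.0, 12)
y_train = expensive_code(x_train)

jitter = 1e-8                                   # numerical stabilizer
K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

# Emulator predictions (mean and variance) at untried inputs.
x_new = np.linspace(0.0, 6.0, 61)
K_s = rbf(x_new, x_train)
mean = K_s @ alpha
var = 1.0 - np.einsum("ij,jk,ik->i", K_s, np.linalg.inv(K), K_s)
```

    Once fitted, the emulator is cheap enough to probe thousands of times, which is what makes the sensitivity and uncertainty analyses described above tractable.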

  14. Genetic Code Analysis Toolkit: A novel tool to explore the coding properties of the genetic code and DNA sequences

    Science.gov (United States)

    Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.

    2018-01-01

    The genetic code is degenerate and it is assumed that this redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under investigation. This paper presents a Genetic Code Analysis Toolkit (GCAT) which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and others. GCAT comes with a fertile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/
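
    One of the properties GCAT tests, comma-freeness, is easy to state: a codon set X is comma-free if no codon of X appears at a shifted reading position in any concatenation of two codons of X. A direct check (my own sketch, not GCAT's Java implementation) looks like this:

```python
def is_comma_free(codons):
    """Return True if no codon occurs at offset 1 or 2 in any
    concatenation of two codons from the set."""
    codon_set = set(codons)
    for u in codon_set:
        for v in codon_set:
            s = u + v
            # The two out-of-frame windows of the 6-letter concatenation.
            if s[1:4] in codon_set or s[2:5] in codon_set:
                return False
    return True

print(is_comma_free({"ACG", "TCG"}))  # True
print(is_comma_free({"AAA"}))         # False: AAAAAA reads AAA off-frame
```

    Comma-free codes are a strictly stronger property than the circular codes also handled by GCAT; the same pairwise-concatenation idea generalizes to the circularity test over longer words.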

  15. The current status of cyanobacterial nomenclature under the "prokaryotic" and the "botanical" code.

    Science.gov (United States)

    Oren, Aharon; Ventura, Stefano

    2017-10-01

    Cyanobacterial taxonomy developed in the botanical world because Cyanobacteria/Cyanophyta have traditionally been identified as algae. However, they possess a prokaryotic cell structure, and phylogenetically they belong to the Bacteria. This caused nomenclature problems as the provisions of the International Code of Nomenclature for algae, fungi, and plants (ICN; the "Botanical Code") differ from those of the International Code of Nomenclature of Prokaryotes (ICNP; the "Prokaryotic Code"). While the ICN recognises names validly published under the ICNP, Article 45(1) of the ICN has not yet been reciprocated in the ICNP. Different solutions have been proposed to solve the current problems. In 2012 a Special Committee on the harmonisation of the nomenclature of Cyanobacteria was appointed, but its activity has been minimal. Two opposing proposals to regulate cyanobacterial nomenclature were recently submitted, one calling for deletion of the cyanobacteria from the groups of organisms whose nomenclature is regulated by the ICNP, the second to consistently apply the rules of the ICNP to all cyanobacteria. Following a general overview of the current status of cyanobacterial nomenclature under the two codes we present five case studies of genera for which nomenclatural aspects have been discussed in recent years: Microcystis, Planktothrix, Halothece, Gloeobacter and Nostoc.

  16. Channel modeling, signal processing and coding for perpendicular magnetic recording

    Science.gov (United States)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. 
Moreover, the system performance can be further improved by

  17. Annotating long intergenic non-coding RNAs under artificial selection during chicken domestication.

    Science.gov (United States)

    Wang, Yun-Mei; Xu, Hai-Bo; Wang, Ming-Shan; Otecko, Newton Otieno; Ye, Ling-Qun; Wu, Dong-Dong; Zhang, Ya-Ping

    2017-08-15

    Numerous biological functions of long intergenic non-coding RNAs (lincRNAs) have been identified. However, the contribution of lincRNAs to the domestication process has remained elusive. Following domestication from their wild ancestors, animals display substantial changes in many phenotypic traits. Therefore, it is possible that diverse molecular drivers play important roles in this process. We analyzed 821 transcriptomes in this study and annotated 4754 lincRNA genes in the chicken genome. Our population genomic analysis indicates that 419 lincRNAs potentially evolved during artificial selection related to the domestication of chicken, while a comparative transcriptomic analysis identified 68 lincRNAs that were differentially expressed under different conditions. We also found 47 lincRNAs linked to special phenotypes. Our study provides a comprehensive view of the genome-wide landscape of lincRNAs in chicken. This will promote a better understanding of the roles of lincRNAs in domestication, and the genetic mechanisms associated with the artificial selection of domestic animals.

  18. Development of system analysis code for pyrochemical process using molten salt electrorefining

    International Nuclear Information System (INIS)

    Tozawa, K.; Matsumoto, T.; Kakehi, I.

    2000-04-01

This report describes the development of a cathode processor calculation code that simulates the mass and heat transfer phenomena of the distillation process, and of an analytical model for the cooling behavior of a pyrochemical process cell, both running on personal computers. The pyrochemical process using molten salt electrorefining would introduce new technologies for new fuels: particle oxide, particle nitride and metallic fuels. The cathode processor calculation code with the distillation process was developed. A code validation calculation was conducted on the basis of the benchmark problem for natural convection in a square cavity; results obtained with the present code agreed well with the published benchmark solution for the velocity-temperature fields and for the maximum velocity and its location. Functions were added to make the simulation more realistic and its use more efficient. A test run was conducted with the modified code for an axisymmetric enclosed vessel simulating a cathode processor, and the capability of simulating the distillation process with the code was confirmed. An analytical model for the cooling behavior of the pyrochemical process cell was developed. The model was selected by comparing benchmark analyses with detailed analyses on an engineering workstation. Flow and temperature distributions were confirmed by steady-state analysis. In the transient cooling analysis, an initial transient temperature peak occurred at the heat-balance condition of the steady-state analysis, and the final gas temperature distribution depended on the gas circulation flow that developed during the transient. Different final gas temperature distributions could therefore arise from the same steady-state result, so the system has the potential for metastable states. It was therefore necessary to design the gas cooling flow pattern so as to avoid cooling-gas circulation
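The square-cavity validation mentioned above can be illustrated in miniature. The sketch below is an illustrative assumption, not the report's actual code: it solves only the conduction limit of the differentially heated square cavity (hot left wall, cold right wall, insulated top and bottom) by Jacobi iteration, whereas the full de Vahl Davis benchmark couples this temperature field to a buoyancy-driven velocity field.

```python
import numpy as np

def solve_cavity_temperature(n=41, tol=1e-6, max_iter=20000):
    """Jacobi iteration for steady heat conduction in a unit square:
    hot left wall (T=1), cold right wall (T=0), insulated top and
    bottom walls (zero normal gradient).  This is the conduction
    (Ra -> 0) limit of the square-cavity benchmark."""
    T = np.zeros((n, n))
    T[:, 0] = 1.0    # hot wall
    T[:, -1] = 0.0   # cold wall
    for _ in range(max_iter):
        T_new = T.copy()
        # five-point Laplacian average on interior nodes
        T_new[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                                    + T[1:-1, 2:] + T[1:-1, :-2])
        # insulated top/bottom: copy the adjacent interior row
        T_new[0, 1:-1] = T_new[1, 1:-1]
        T_new[-1, 1:-1] = T_new[-2, 1:-1]
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T

T = solve_cavity_temperature()
```

In this conduction limit the converged field is linear in x, which gives a simple check of the solver before any convective terms are added.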

  19. Probabilistic Design in a Sheet Metal Stamping Process under Failure Analysis

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao, Jian; Chen, Wei; Xia, Z. Cedric

    2005-01-01

Sheet metal stamping processes have been widely implemented in many industries due to their repeatability and productivity. In general, simulations of a sheet metal forming process involve nonlinearity, complex material behavior and tool-material interaction. Instabilities in the form of tearing and wrinkling are major concerns in many sheet metal stamping processes. In this work, a sheet metal stamping process for a mild steel wheelhouse used in the automobile industry is studied using an explicit nonlinear finite element code, incorporating failure analysis (tearing and wrinkling) and design under uncertainty. Margins of tearing and wrinkling are quantitatively defined via stress-based criteria for system-level design. The forming process restrains the blank with drawbeads instead of a blank holder force. The main parameters of interest in this work are the friction conditions, drawbead configurations, sheet metal properties, and numerical errors. A robust design model is created to conduct a probabilistic design, made possible for this complex engineering process by an efficient uncertainty propagation technique. The weighted three-point-based method estimates the statistical characteristics (mean and variance) of the responses of interest (margins of failure), and provides a systematic approach to designing a sheet metal forming process under the framework of design under uncertainty
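A weighted three-point moment estimate of this general family can be sketched as follows. The node locations and weights below are the standard three-point Gauss-Hermite rule for independent Gaussian inputs, combined with a univariate dimension-reduction approximation; they are assumptions for illustration, and the exact nodes, weights and sampling scheme of the paper's method may differ.

```python
import numpy as np

# 3-point Gauss-Hermite nodes/weights for a standard normal variable
NODES = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
WEIGHTS = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])

def three_point_moments(response, means, stds):
    """Estimate the mean and variance of response(x) for independent
    Gaussian inputs by varying one input at a time about the mean
    vector (univariate dimension-reduction approximation)."""
    means = np.asarray(means, float)
    stds = np.asarray(stds, float)
    base = response(means)          # response at the nominal design
    mean_est = base
    var_est = 0.0
    for i in range(len(means)):
        m_i = 0.0  # first moment of the i-th one-dimensional term
        s_i = 0.0  # second moment of the i-th one-dimensional term
        for node, w in zip(NODES, WEIGHTS):
            x = means.copy()
            x[i] = means[i] + node * stds[i]
            g = response(x) - base
            m_i += w * g
            s_i += w * g * g
        mean_est += m_i
        var_est += s_i - m_i * m_i
    return mean_est, var_est
```

For a linear response the estimate is exact, which makes a convenient sanity check: with response 2*x0 + 3*x1, means (1, 2) and standard deviations (0.5, 1), the true mean is 8 and the true variance is 10.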

  20. Introduction of SCIENCE code package

    International Nuclear Information System (INIS)

    Lu Haoliang; Li Jinggang; Zhu Ya'nan; Bai Ning

    2012-01-01

The SCIENCE code package is a set of neutronics tools based on 2D assembly calculations and 3D core calculations. It is made up of APOLLO2-F, SMART and SQUALE and is used to perform the nuclear design and loading pattern analysis for the reactors in operation or under construction at China Guangdong Nuclear Power Group. The purpose of this paper is to briefly present the physical and numerical models used in each computational code of the SCIENCE code package, including a description of the general structure of the package, the coupling between the APOLLO2-F transport lattice code and the SMART core nodal code, and the SQUALE code used for processing the core maps. (authors)

  1. SSYST, a code-system for analysing transient LWR fuel rod behaviour under off-normal conditions

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Gulden, W.

    1983-01-01

SSYST is a code-system for analysing transient fuel rod behaviour under off-normal conditions, developed conjointly by the Institut fuer Kernenergetik und Energiesysteme (IKE), Stuttgart, and Kernforschungszentrum Karlsruhe (KfK) under contract of Projekt Nukleare Sicherheit (PNS) at KfK. The main differences between SSYST and similar codes are (1) an open-ended modular code organisation, and (2) a preference for simple models wherever possible. While the first feature makes SSYST a very flexible tool, easily adapted to changing requirements, the second feature leads to short execution times. The analysis of transient rod behaviour under LOCA boundary conditions takes 2 min cpu-time (IBM-3033), so that extensive parametric studies become possible. This paper gives an outline of the overall code organisation and a general overview of the physical models implemented. Besides explaining the routine application of SSYST in the analysis of loss-of-coolant accidents, examples are given of special applications which have led to a satisfactory understanding of the decisive influence of deviations from rotational symmetry on the fuel rod perimeter. (author)

  2. SSYST: A code-system for analyzing transient LWR fuel rod behaviour under off-normal conditions

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Gulden, W.

    1983-01-01

    SSYST is a code-system for analyzing transient fuel rod behaviour under off-normal conditions, developed conjointly by the Institut fur Kernenergetik und Energiesysteme (IKE), Stuttgart, and Kernforschungszentrum Karlsruhe (KfK) under contract of Projekt Nukleare Sicherheit (PNS) at KfK. The main differences between SSYST and similar codes are an open-ended modular code organization, and a preference for simple models, wherever possible. While the first feature makes SSYST a very flexible tool, easily adapted to changing requirements, the second feature leads to short execution times. The analysis of transient rod behaviour under LOCA boundary conditions takes 2 min cpu-time (IBM-3033), so that extensive parametric studies become possible. This paper gives an outline of the overall code organisation and a general overview of the physical models implemented. Besides explaining the routine application of SSYST in the analysis of loss-of-coolant accidents, examples are given of special applications which have led to a satisfactory understanding of the decisive influence of deviations from rotational symmetry on the fuel rod perimeter

  3. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.
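The Hedge algorithm underlying the online moving-pattern classifier is the standard multiplicative-weights update over a set of experts: each expert's weight is multiplied by an exponential of its loss, then the weights are renormalized. A minimal generic sketch (not the paper's classifier, whose experts and losses are specific to motion patterns):

```python
import numpy as np

def hedge_weights(losses, eta=0.5):
    """Run the Hedge (multiplicative-weights) update over a sequence
    of loss vectors, one row per round, one column per expert.
    Returns the final normalized weight vector."""
    n_experts = losses.shape[1]
    w = np.ones(n_experts) / n_experts      # uniform prior over experts
    for round_losses in losses:
        w = w * np.exp(-eta * round_losses)  # penalize lossy experts
        w = w / w.sum()                      # renormalize
    return w

# Toy run: expert 0 is always wrong (loss 1), expert 1 always right (loss 0)
losses = np.array([[1.0, 0.0]] * 10)
w = hedge_weights(losses)
```

After a few rounds nearly all weight concentrates on the better expert, which is exactly the behavior an online classifier needs to track the currently best-matching motion pattern.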

  4. PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices

    International Nuclear Information System (INIS)

    Dunn, M.E.

    2000-01-01

    PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI
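The eigenvalue test applied to each correlation matrix can be sketched generically: symmetrize, compute the spectrum, and flag any eigenvalue below a small negative tolerance. This is a sketch of the mathematical check only; PUFF-III itself is a Fortran module and its tolerances may differ.

```python
import numpy as np

def check_correlation_matrix(corr, tol=1e-10):
    """Return the eigenvalues of a (symmetrized) correlation matrix
    and a flag indicating whether it is positive semidefinite to
    within the given tolerance."""
    corr = np.asarray(corr, float)
    sym = 0.5 * (corr + corr.T)          # guard against asymmetry
    eigvals = np.linalg.eigvalsh(sym)    # sorted ascending
    return eigvals, bool(eigvals.min() >= -tol)

# A valid 2x2 correlation matrix, and a 3x3 one with inconsistent
# pairwise correlations (its determinant is negative)
good = [[1.0, 0.3], [0.3, 1.0]]
bad = [[1.0, 0.9, -0.9], [0.9, 1.0, 0.9], [-0.9, 0.9, 1.0]]
eig_good, ok_good = check_correlation_matrix(good)
eig_bad, ok_bad = check_correlation_matrix(bad)
```

A matrix that fails this test cannot be a covariance of any real data, so flagging it early catches evaluation errors before the covariances are used downstream.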

  5. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    Science.gov (United States)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

The wavefront coding athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the additional advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize the system's behavior in non-ideal temperature environments and supports realization of the system design targets. In this paper, we exploit the data interoperability between SolidWorks and ZEMAX to simplify the traditional structural/thermal/optical integrated analysis process, and we design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system in SolidWorks, yielding the changes in curvature, refractive index and lens spacing. The deformed model is then imported into ZEMAX for ray tracing, giving the changes of the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and the restorability of its images, providing a basis and reference for the optimal design of such systems. The results show that the single-material infrared wavefront coding athermal system tolerates axial temperature gradients up to a temperature fluctuation of 60°C, much higher than its tolerance of radial temperature gradients.

  6. The role of the PIRT process in identifying code improvements and executing code development

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.

    1997-01-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a low probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications

  7. 77 FR 17460 - Multistakeholder Process To Develop Consumer Data Privacy Codes of Conduct

    Science.gov (United States)

    2012-03-26

    ..., 2012, NTIA requested public comments on (1) which consumer data privacy issues should be the focus of.... 120214135-2203-02] RIN 0660-XA27 Multistakeholder Process To Develop Consumer Data Privacy Codes of Conduct... request for public comments on the multistakeholder process to develop consumer data privacy codes of...

  8. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  9. Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

    CERN Multimedia

    2005-01-01

    Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

  10. 76 FR 37034 - Certain Employee Remuneration in Excess of $1,000,000 Under Internal Revenue Code Section 162(m)

    Science.gov (United States)

    2011-06-24

    ... Certain Employee Remuneration in Excess of $1,000,000 Under Internal Revenue Code Section 162(m) AGENCY... remuneration in excess of $1,000,000 under the Internal Revenue Code (Code). The proposed regulations clarify... stock options, it is intended that the directors may retain discretion as to the exact number of options...

  11. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Iwamoto, Yosuke, E-mail: iwamoto.yosuke@jaea.go.jp; Ogawa, Tatsuhiko

    2017-04-01

Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL2015, ENDF/B-VII.1, and JEFF3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL2015, ENDF/B-VII.1, and JENDL4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL2015 and NJOY + TENDL2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for ⁷²Ge, ⁷⁵As, ⁸⁹Y, and ¹⁰⁹Ag in the ENDF/B-VII.1 library, and for ⁹⁰Zr and ⁵⁵Mn in the JEFF3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL2015 can calculate PKA spectra and heating numbers correctly.

  12. Calculation code MIXSET for Purex process

    International Nuclear Information System (INIS)

    Gonda, Kozo; Fukuda, Shoji.

    1977-09-01

MIXSET is a FORTRAN IV calculation code for the Purex process that simulates the dynamic behavior of solvent extraction processes in mixer-settlers. Two options permit terminating the dynamic phase either by time or on achieving steady state; these options also permit continuing a calculation successively, using new inputs, from an arbitrary phase. A third option permits an artificially rapid approach to steady state, and a fourth option permits searching for the optimum input that satisfies both the product specification and the recovery rate. MIXSET handles a chemical system of up to eight components, with or without mutual dependence of the distribution of the components. The chemical system in MIXSET can include chemical reactions and/or decay reactions. Distribution data can be supplied by third-order polynomial equations or tables, and kinetic data by tables or given constants. The fluctuation of the interfacial level height in the settler is converted into flow rate changes of the organic and aqueous streams to follow the dynamic behavior of the extraction process in detail. MIXSET can be applied to flowsheet studies, start-up and/or shut-down procedure studies, and real-time process management in countercurrent solvent extraction processes. (auth.)

  13. When Content Matters: The Role of Processing Code in Tactile Display Design.

    Science.gov (United States)

    Ferris, Thomas K; Sarter, Nadine

    2010-01-01

    The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.

  14. A multi-level code for metallurgical effects in metal-forming processes

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P.A.; Silling, S.A. [Sandia National Labs., Albuquerque, NM (United States). Computational Physics and Mechanics Dept.; Hughes, D.A.; Bammann, D.J.; Chiesa, M.L. [Sandia National Labs., Livermore, CA (United States)

    1997-08-01

The authors present the final report on a Laboratory-Directed Research and Development (LDRD) project, A Multi-Level Code for Metallurgical Effects in Metal-Forming Processes, performed during fiscal years 1995 and 1996. The project focused on the development of new modeling capabilities for simulating forging and extrusion processes, which typically display phenomenology occurring on two different length scales. In support of model fitting and code validation, ring compression and extrusion experiments were performed on 304L stainless steel, a material of interest in DOE nuclear weapons applications.

  15. Coding strategies for cochlear implants under adverse environments

    Science.gov (United States)

    Tahmina, Qudsia

Cochlear implants are electronic prosthetic devices that restore partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quiet listening conditions, limitations remain on speech perception under adverse environments such as background noise, reverberation and band-limited channels. We propose strategies that improve the intelligibility of speech transmitted over telephone networks, of reverberated speech, and of speech in the presence of background noise. For telephone-processed speech, we examine the effects of adding low-frequency and high-frequency information to the band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high-frequency information, and this study therefore supports the design of algorithms that extend the bandwidth towards higher frequencies. The results also indicated an added benefit from hearing aids for bimodal listeners in all four listening conditions. Speech understanding in acoustically reverberant environments is always a difficult task for hearing-impaired listeners. Reverberated sound consists of the direct sound, early reflections and late reflections; late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction to suppress the reverberant energy from late reflections. Results from listening tests for two reverberant conditions (RT60 = 0.3 s and 1.0 s) indicated significant improvement when stimuli were processed with the spectral subtraction strategy. The proposed strategy operates with little to no prior information on the signal and the room characteristics and can therefore potentially be implemented in real-time CI
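Spectral subtraction of the kind described can be sketched for a single frame: subtract an estimate of the late-reverberant power spectrum from the frame's power spectrum, floor the result, and resynthesize with the original phase. This is the generic textbook form, assuming a late-reverberation PSD estimate is already available; the dissertation's estimator details are not reproduced here.

```python
import numpy as np

def spectral_subtract(frame, late_reverb_psd, alpha=1.0, floor=0.05):
    """One-frame magnitude-domain spectral subtraction.
    frame          : real-valued time-domain samples (even length)
    late_reverb_psd: estimated late-reverberant power per rfft bin
    alpha          : over-subtraction factor
    floor          : spectral floor as a fraction of the noisy power"""
    spec = np.fft.rfft(frame)
    power = np.abs(spec) ** 2
    # subtract the late-reverberant power, but never go below the floor
    clean_power = np.maximum(power - alpha * late_reverb_psd,
                             floor * power)
    # per-bin gain; keep the noisy phase by scaling the complex spectrum
    gain = np.sqrt(clean_power / np.maximum(power, 1e-12))
    return np.fft.irfft(gain * spec, n=len(frame))

# Sanity check: with a zero reverberation estimate the frame passes through
t = np.arange(64)
frame = np.sin(2 * np.pi * 4 * t / 64)
passthrough = spectral_subtract(frame, np.zeros(33))
```

In a full system this would run per overlapping windowed frame with overlap-add resynthesis; the single-frame form above just isolates the gain computation.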

  16. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  17. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  18. The role of the PIRT process in identifying code improvements and executing code development

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.E. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Boyack, B.E. [Los Alamos National Lab., NM (United States)

    1997-07-01

In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  19. A System Structure for a VHTR-SI Process Dynamic Simulation Code

    International Nuclear Information System (INIS)

    Chang, Jiwoon; Shin, Youngjoon; Kim, Jihwan; Lee, Kiyoung; Lee, Wonjae; Chang, Jonghwa; Youn, Cheung

    2008-01-01

The VHTR-SI process dynamic simulation code, embedded in a mathematical solution engine, is an application software system that simulates the dynamic behavior of the VHTR-SI process. The software system also supports a user-friendly graphical user interface (GUI) for user input/output. Structured analysis techniques were developed in the late 1970s by Yourdon, DeMarco, Gane and Sarson for applying a systematic approach to systems analysis. They included the use of data flow diagrams and data modeling, and fostered the use of an implementation-independent graphical notation for documentation. In this paper, we present a system structure for the VHTR-SI process dynamic simulation code developed using the methodologies of structured analysis

  20. Development of the object-oriented analysis code for the estimation of material balance in pyrochemical reprocessing process (2). Modification of the code for the analysis of holdup of nuclear materials in the process

    International Nuclear Information System (INIS)

    Okamura, Nobuo; Tanaka, Hiroshi

    2001-04-01

Pyrochemical reprocessing is considered a promising process for the FBR fuel cycle, mainly from the economical viewpoint. However, material behavior in the process is not sufficiently understood because of the lack of experimental data. The authors have developed an object-oriented analysis code for the estimation of the material balance in the process, which is flexibly applicable to changes in the process flow sheet. The objective of this study is to modify the code so as to analyze the holdup of nuclear materials in the pyrochemical process from the safeguards viewpoint, because the holdup in this process may be larger than in an aqueous process. As a result of the modification, the relationship between the production of nuclear materials and their holdup in the process can be evaluated by the code. (author)

  1. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji

    1998-01-01

In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized this way, attaining a high speedup ratio in highly parallel processing requires optimizing the code with respect to load balancing, which is practically difficult. In fact, the speedup ratio with 128 processors remains at only about one hundred on the test bed used for the performance evaluation. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that static load balancing has difficulty attaining high performance, especially for neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in nearly a fifteen percent reduction in execution time. (author)
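The contrast between static and dynamic particle assignment can be illustrated with a toy scheduling model. This is an illustrative sketch only, not the MCNP parallelization itself, and it ignores the communication-cost term of the paper's method; it simply shows why pulling work dynamically beats a fixed equal split when history costs vary.

```python
import heapq

def static_assignment(costs, n_workers):
    """Static balancing: split the history list into equal contiguous
    blocks up front and return the slowest worker's total time."""
    n = len(costs)
    block = (n + n_workers - 1) // n_workers
    return max(sum(costs[i:i + block]) for i in range(0, n, block))

def dynamic_assignment(costs, n_workers, chunk=1):
    """Dynamic balancing: each worker pulls the next chunk of particle
    histories as soon as it becomes idle, simulated with a min-heap
    of worker finish times."""
    finish = [0.0] * n_workers
    heapq.heapify(finish)
    for i in range(0, len(costs), chunk):
        t = heapq.heappop(finish)            # earliest-idle worker
        heapq.heappush(finish, t + sum(costs[i:i + chunk]))
    return max(finish)

# Heterogeneous histories: a few expensive particles at the front
costs = [10.0] * 4 + [1.0] * 28
```

With four workers, the static split strands all four expensive histories on one worker (makespan 44), while the dynamic pull spreads them out (makespan 17).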

  2. Development and preliminary validation of flux map processing code MAPLE

    International Nuclear Information System (INIS)

    Li Wenhuai; Zhang Xiangju; Dang Zhen; Chen Ming'an; Lu Haoliang; Li Jinggang; Wu Yuanbao

    2013-01-01

The self-reliant flux map processing code MAPLE was developed by China General Nuclear Power Corporation (CGN). The weight coefficient method (WCM), polynomial expansion method (PEM) and thin plate spline (TPS) method were applied to fit the deviation between measured and predicted detector signals on the two-dimensional radial plane, and to interpolate or extrapolate the deviation at non-instrumented locations. Comparison of results in the test cases shows that the TPS method captures the information of curved fitting lines better than the other methods. The measured flux map data of the Lingao Nuclear Power Plant were processed with MAPLE as validation test cases, combined with the SMART code. Validation results show that the calculation results of MAPLE are reasonable and satisfactory. (authors)
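A thin plate spline fit of scattered deviations of the kind described can be sketched with the standard 2-D TPS system: a radial basis U(r) = r² log r plus an affine part, solved as one linear system with side conditions. This generic implementation is an assumption for illustration, not MAPLE's actual routine.

```python
import numpy as np

def tps_fit(points, values, eps=1e-12):
    """Fit f(x, y) = sum_i w_i * U(|p - p_i|) + a0 + a1*x + a2*y with
    U(r) = r^2 log r, interpolating the given values exactly."""
    p = np.asarray(points, float)
    v = np.asarray(values, float)
    n = len(p)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    K = np.where(d > eps, d ** 2 * np.log(d + eps), 0.0)
    P = np.hstack([np.ones((n, 1)), p])     # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T                          # side conditions P^T w = 0
    b = np.concatenate([v, np.zeros(3)])
    coef = np.linalg.solve(A, b)
    return p, coef

def tps_eval(model, q, eps=1e-12):
    """Evaluate a fitted TPS at query points q (list of (x, y))."""
    p, coef = model
    q = np.atleast_2d(np.asarray(q, float))
    d = np.linalg.norm(q[:, None, :] - p[None, :, :], axis=-1)
    K = np.where(d > eps, d ** 2 * np.log(d + eps), 0.0)
    P = np.hstack([np.ones((len(q), 1)), q])
    return K @ coef[:len(p)] + P @ coef[len(p):]

# Demo: a TPS reproduces affine data exactly at and between the nodes
pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
vals = [2.0 + 3.0 * x - y for x, y in pts]
model = tps_fit(pts, vals)
```

The affine part is what lets the spline extrapolate a smooth trend beyond the instrumented region, which is the situation a flux map code faces at non-instrumented assemblies.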

  3. A computer code simulating multistage chemical exchange column under wide range of operating conditions

    International Nuclear Information System (INIS)

    Yamanishi, Toshihiko; Okuno, Kenji

    1996-09-01

A computer code has been developed to simulate a multistage CECE (Combined Electrolysis Chemical Exchange) column. The basic equations are solved by the Newton-Raphson method. The independent variables are the atom fractions of D and T in each stage when H is the dominant species within the column; these variables are replaced by the atom fractions of H and T when D is dominant. Effective techniques have also been developed for obtaining a solution of the basic equations: a procedure for setting the initial values of the independent variables, and a procedure for ensuring convergence of the Newton-Raphson method. The computer code allows us to simulate the column behavior over a wide range of operating conditions. Even in a severe case where the dominant species changes along the column height, the code can find a solution of the basic equations. (author)
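The core Newton-Raphson iteration for a coupled set of stage equations can be sketched generically. The toy 2x2 system and the finite-difference Jacobian below are illustrative assumptions standing in for the CECE stage balances; the actual code adds problem-specific initial-guess and convergence-safeguard procedures on top of this loop.

```python
import numpy as np

def newton_raphson(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve the vector equation f(x) = 0 by Newton-Raphson, building
    the Jacobian column by column with forward finite differences."""
    x = np.asarray(x0, float).copy()
    for _ in range(max_iter):
        fx = np.asarray(f(x), float)
        if np.max(np.abs(fx)) < tol:
            return x
        n = len(x)
        J = np.empty((n, n))
        for j in range(n):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.asarray(f(xp), float) - fx) / h
        x = x - np.linalg.solve(J, fx)   # Newton step
    return x

# Toy coupled system standing in for stage balances:
#   x^2 + y^2 = 4,   x * y = 1
def residual(z):
    x, y = z
    return [x * x + y * y - 4.0, x * y - 1.0]

root = newton_raphson(residual, [2.0, 0.3])
```

The quality of the initial guess decides whether the iteration converges at all, which is why the abstract singles out the initial-value and convergence procedures as separate developments.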

  4. Catalogue of nuclear fusion codes - 1976

    International Nuclear Information System (INIS)

    1976-10-01

    A catalogue is presented of the computer codes in nuclear fusion research developed by JAERI, Division of Thermonuclear Fusion Research and Division of Large Tokamak Development in particular. It contains a total of about 100 codes under the categories: Atomic Process, Data Handling, Experimental Data Processing, Engineering, Input and Output, Special Languages and Their Application, Mathematical Programming, Miscellaneous, Numerical Analysis, Nuclear Physics, Plasma Physics and Fusion Research, Plasma Simulation and Numerical Technique, Reactor Design, Solid State Physics, Statistics, and System Program. (auth.)

  5. SHARAKU: an algorithm for aligning and clustering read mapping profiles of deep sequencing in non-coding RNA processing.

    Science.gov (United States)

    Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi

    2016-06-15

    Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After obtaining sequence reads, the short reads are mapped to a reference genome, and specific mapping patterns, called read mapping profiles, can be detected; these are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into an alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited superior performance to previous methods in correctly clustering the read mapping profiles with respect to 5'-end and 3'-end processing from degradation patterns, and in detecting similar processing patterns in deriving the shorter RNAs. Further, using experimental small RNA sequencing data for the common marmoset brain, SHARAKU succeeded in identifying significant clusters of read mapping profiles corresponding to similar processing patterns of small derived RNA families expressed in the brain. The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: The sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work are available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA004502. Contact: yasu@bio.keio.ac.jp. Supplementary data are available.

  6. Keeping a common bawdy house becomes a "serious offence" under Criminal Code.

    Science.gov (United States)

    2010-10-01

    New federal regulations targeting organized crime will make keeping a common bawdy house a "serious offence" under the Criminal Code. Sex work advocates reacted by calling the measure a serious step back that will undermine the protection of sex workers' human rights, safety, dignity and health.

  7. A code inspection process for security reviews

    Science.gov (United States)

    Garzoglio, Gabriele

    2010-04-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  8. A code inspection process for security reviews

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2010-01-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  9. A code inspection process for security reviews

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele; /Fermilab

    2009-05-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  10. PERFORMANCE ANALYSIS OF OPTICAL CDMA SYSTEM USING VC CODE FAMILY UNDER VARIOUS OPTICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    HASSAN YOUSIF AHMED

    2012-06-01

    The intent of this paper is to study the performance of spectral-amplitude-coding optical code-division multiple-access (SAC-OCDMA) systems using the Vector Combinatorial (VC) code under various optical parameters. The code can be constructed algebraically, based on Euclidean vectors, for any positive integer. An important property of this code is that the maximum cross-correlation is always one, which means that multi-user interference (MUI) and phase-induced intensity noise are reduced. Transmitter and receiver structures based on unchirped fiber Bragg gratings (FBGs) using the VC code, taking into account the intensity, shot and thermal noise sources, are demonstrated. The impact of fiber distance on the bit error rate (BER) is reported using a commercial optical systems simulator, Virtual Photonic Instrument (VPI(TM)). The VC code is compared mathematically with reported codes that use similar techniques. We analyzed and characterized the fiber link, received power, BER and channel spacing. The performance and optimization of the VC code in a SAC-OCDMA system are reported. By comparing the theoretical and simulation results taken from VPI(TM), we have demonstrated that, for a high number of users, the effective source power is adequate when the VC code is used, even at higher data rates. It is also found that the BER decreases as the channel spacing goes from very narrow to wider, with the best performance occurring at a spacing bandwidth between 0.8 and 1 nm. We have shown that a SAC system utilizing the VC code significantly improves performance compared with the reported codes.
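
    The key correlation property can be illustrated with a small check. The codewords below are hypothetical 0/1 spectral sequences of weight 3, chosen only to exhibit a maximum pairwise cross-correlation of one; they are not an actual VC construction:

    ```python
    import numpy as np

    # hypothetical 0/1 spectral codewords (one row per user), weight 3 each
    codes = np.array([
        [1, 1, 0, 0, 1, 0, 0, 0, 0],
        [0, 0, 1, 1, 0, 0, 1, 0, 0],
        [1, 0, 0, 0, 0, 1, 0, 1, 0],
        [0, 1, 0, 0, 0, 0, 1, 0, 1],
    ])

    cross = codes @ codes.T           # pairwise in-phase cross-correlations
    np.fill_diagonal(cross, 0)        # ignore auto-correlations (= code weight)
    print("max cross-correlation:", cross.max())
    ```

    A maximum cross-correlation of one means any two users share at most one spectral chip, which is what limits multi-user interference in a SAC-OCDMA system.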

  11. Tardos fingerprinting codes in the combined digit model

    NARCIS (Netherlands)

    Skoric, B.; Katzenbeisser, S.; Schaathun, H.G.; Celik, M.U.

    2009-01-01

    We introduce a new attack model for collusion-secure codes, called the combined digit model, which represents signal processing attacks against the underlying watermarking level better than existing models. In this paper, we analyze the performance of two variants of the Tardos code and show that

  12. Performance analysis of simultaneous dense coding protocol under decoherence

    Science.gov (United States)

    Huang, Zhiming; Zhang, Cai; Situ, Haozhen

    2017-09-01

    The simultaneous dense coding (SDC) protocol is useful in designing quantum protocols. We analyze the performance of the SDC protocol under the influence of noisy quantum channels. Six kinds of paradigmatic Markovian noise along with one kind of non-Markovian noise are considered. The joint success probability of both receivers and the success probabilities of one receiver are calculated for three different locking operators. Some interesting properties have been found, such as invariance and symmetry. Among the three locking operators we consider, the SWAP gate is most resistant to noise and results in the same success probabilities for both receivers.

  13. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods for an approximation of review expert’s performance evaluation function. Due to limitations in ...

  14. Dilute: A code for studying beam evolution under rf noise

    International Nuclear Information System (INIS)

    Shih, H.; Ellison, J.A.; Schiesser, W.E.

    1993-01-01

    Longitudinal beam dynamics under rf noise has been modeled by Dome, Krinsky, and Wang using a diffusion-in-action PDE. If the primary interest is the evolution of the beam in action, it is much simpler to integrate the model PDE than to undertake tracking simulations. Here we describe the code that we developed to solve the model PDE using the numerical Method of Lines. Features of the code include (1) computation of the distribution in action for the initial beam from a Gaussian or user-supplied distribution in longitudinal phase space, (2) computation of the diffusion coefficient for white noise or from a user-supplied spectral density for non-white noise, (3) discretization of the model PDE using finite-difference or Galerkin finite-element approximations with a uniform or non-uniform grid, and (4) integration of the system of ODEs in time by the solver RKF45 or a user-supplied ODE solver.
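
    A minimal Method of Lines sketch for a diffusion-in-action PDE of this kind follows. The diffusion coefficient and initial distribution are illustrative, and a fixed-step RK4 integrator stands in for RKF45:

    ```python
    import numpy as np

    # Method of Lines for  dpsi/dt = d/dJ ( D(J) dpsi/dJ )  with zero-flux boundaries
    nJ = 101
    J = np.linspace(0.0, 1.0, nJ)
    dJ = J[1] - J[0]
    D = 0.01 * (1.0 + J)                 # assumed action-dependent diffusion coefficient
    D_half = 0.5 * (D[:-1] + D[1:])      # D evaluated at cell interfaces

    def rhs(psi):
        flux = D_half * np.diff(psi) / dJ     # diffusive flux at interfaces
        dpsi = np.empty_like(psi)
        dpsi[1:-1] = np.diff(flux) / dJ
        dpsi[0] = flux[0] / dJ                # zero-flux boundary at J = 0
        dpsi[-1] = -flux[-1] / dJ             # zero-flux boundary at J = 1
        return dpsi

    def rk4_step(psi, dt):
        k1 = rhs(psi)
        k2 = rhs(psi + 0.5 * dt * k1)
        k3 = rhs(psi + 0.5 * dt * k2)
        k4 = rhs(psi + dt * k3)
        return psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    psi = np.exp(-((J - 0.5) / 0.1) ** 2)     # initial beam distribution in action
    total0 = psi.sum() * dJ
    dt = 0.002                                # stable for explicit stepping: dt < dJ^2 / (2 max D)
    for _ in range(2500):                     # integrate to t = 5
        psi = rk4_step(psi, dt)
    # the zero-flux scheme conserves the total beam (up to roundoff)
    print(total0, psi.sum() * dJ)
    ```

    Discretizing only the action coordinate turns the PDE into a system of ODEs, which is exactly what lets a standard ODE solver such as RKF45 advance the beam distribution in time.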

  15. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  16. Rod behaviour under base load, load follow and frequency control operation: CYRANO 2 code predictions versus experimental results

    International Nuclear Information System (INIS)

    Gautier, B.; Raybaud, A.

    1984-01-01

    The French PWR reactors are now routinely operating under load follow and frequency control. In order to demonstrate that these operating conditions do not increase the fuel failure rate, fuel rod behaviour calculations have been performed by EDF with the CYRANO 2 code. In parallel with these theoretical calculations, code predictions have been compared to experimental results. The paper presents some of the comparisons performed on 17x17 fuel irradiated in FESSENHEIM 2 up to 30 GWd/tU under base load operation, and in the CAP reactor under load follow and frequency control conditions. It is shown that the experimental results can be predicted with reasonable accuracy by the CYRANO 2 code. The experimental work was carried out under joint R and D programs by EDF, FRAGEMA, CEA, and WESTINGHOUSE (the CAP program by the French partners only). (author)

  17. Samovar: a thermomechanical code for modeling of geodynamic processes in the lithosphere-application to basin evolution

    DEFF Research Database (Denmark)

    Elesin, Y; Gerya, T; Artemieva, Irina

    2010-01-01

    We present a new 2D finite difference code, Samovar, for high-resolution numerical modeling of complex geodynamic processes. Examples are collision of lithospheric plates (including mountain building and subduction) and lithosphere extension (including formation of sedimentary basins, regions of extended crust, and rift zones). The code models deformation of the lithosphere with viscoelastoplastic rheology, including erosion/sedimentation processes and formation of shear zones in areas of high stresses. It also models steady-state and transient conductive and advective thermal processes including partial melting and magma transport in the lithosphere. The thermal and mechanical parts of the code are tested for a series of physical problems with analytical solutions. We apply the code to geodynamic modeling by examining numerically the processes of lithosphere extension and basin formation...

  18. Improved inter-assembly heat transfer modeling under low flow conditions for the Super System Code (SSC)

    International Nuclear Information System (INIS)

    Horak, W.C.; Guppy, J.G.

    1984-01-01

    The Super System Code (SSC) was developed at the Brookhaven National Laboratory (BNL) for the thermal hydraulic analysis of natural circulation transients, operational transients, and other system wide transients in nuclear power plants. SSC is a generic, best estimate code that models the in-vessel components, heat transport loops, plant protection systems and plant control systems. SSC also simulates the balance of plant when interfaced with the MINET code. SSC has been validated against both numerical and experimental data bases and is now used by several outside users. An important area of interest in LMFBR transient analysis is the prediction of the response of the reactor core under low flow conditions, such as those experienced during a natural circulation event. Under these circumstances there are many physical phenomena which must be modeled to provide an adequate representation by a computer code simulation. The present version of SSC contains numerous models which account for most of the major phenomena. However, one area where the present model in SSC is being improved is in the representation of heat transfer and buoyancy effects under low flow operation. To properly improve the present version, the addition of models to represent certain inter-assembly effects is required.

  19. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multigroup Monte Carlo particle transport code MORSE was modified for high-performance computing on the Monte Carlo Machine Monte-4; the method and the results are described. Monte-4 was specially developed to realize high-performance computing of Monte Carlo particle transport codes, for which it has been difficult to obtain high performance by vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and its performance evaluation on Monte-4 are described. (author)

  20. Energy meshing techniques for processing ENDF/B-VI cross sections using the AMPX code system

    International Nuclear Information System (INIS)

    Dunn, M.E.; Greene, N.M.; Leal, L.C.

    1999-01-01

    Modern techniques for the establishment of criticality safety for fissile systems invariably require the use of neutronic transport codes with applicable cross-section data. Accurate cross-section data are essential for solving the Boltzmann Transport Equation for fissile systems. In the absence of applicable critical experimental data, the use of independent calculational methods is crucial for the establishment of subcritical limits. Moreover, there are various independent modern transport codes available to the criticality safety analyst (e.g., KENO V.a., MCNP, and MONK). In contrast, there is currently only one complete software package that processes data from the Version 6 format of the Evaluated Nuclear Data File (ENDF) to a format useable by criticality safety codes. To facilitate independent cross-section processing, Oak Ridge National Laboratory (ORNL) is upgrading the AMPX code system to enable independent processing of Version 6 formats using state-of-the-art procedures. The AMPX code system has been in continuous use at ORNL since the early 1970s and is the premier processor for providing multigroup cross sections for criticality safety analysis codes. Within the AMPX system, the module POLIDENT is used to access the resonance parameters in File 2 of an ENDF/B library, generate point cross-section data, and combine the cross sections with File 3 point data. At the heart of any point cross-section processing code is the generation of a suitable energy mesh for representing the data. The purpose of this work is to facilitate the AMPX upgrade through the development of a new and innovative energy meshing technique for processing point cross-section data
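
    A generic sketch of one standard meshing strategy (recursive interval halving until linear interpolation meets a tolerance) is shown below. The tolerance, toy resonance shape, and function names are illustrative, not the actual AMPX/POLIDENT algorithm:

    ```python
    import numpy as np

    def linearize(sigma, e_lo, e_hi, tol=1e-3, depth=0, max_depth=30):
        """Build an energy mesh on [e_lo, e_hi] fine enough that linear
        interpolation reproduces sigma within a relative tolerance."""
        e_mid = 0.5 * (e_lo + e_hi)
        lin = 0.5 * (sigma(e_lo) + sigma(e_hi))      # linear interpolant at midpoint
        if depth < max_depth and abs(lin - sigma(e_mid)) > tol * abs(sigma(e_mid)):
            left = linearize(sigma, e_lo, e_mid, tol, depth + 1, max_depth)
            right = linearize(sigma, e_mid, e_hi, tol, depth + 1, max_depth)
            return left[:-1] + right                 # merge, dropping duplicate midpoint
        return [e_lo, e_hi]

    # toy "resonance" cross section: a Lorentzian peak on a flat background
    sigma = lambda e: 1.0 + 100.0 / (1.0 + ((e - 1000.0) / 5.0) ** 2)
    mesh = np.array(linearize(sigma, 900.0, 1100.0, tol=1e-3))
    print(len(mesh), "mesh points")
    ```

    The mesh automatically concentrates points where the cross section varies rapidly (the resonance) and stays coarse on the smooth background, which is the essential requirement for an economical pointwise representation.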

  1. The "periodic table" of the genetic code: A new way to look at the code and the decoding process.

    Science.gov (United States)

    Komar, Anton A

    2016-01-01

    Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of the interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and which better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.

  2. Summary of ENDF/B Pre-Processing Codes June 1983

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1983-06-01

    This is the summary documentation for the 1983 version of the ENDF/B Pre-Processing Codes LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, COMPLOT, and CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc.

  3. Classification of working processes to facilitate occupational hazard coding on industrial trawlers

    DEFF Research Database (Denmark)

    Jensen, Olaf C; Stage, Søren; Noer, Preben

    2003-01-01

    BACKGROUND: Commercial fishing is an extremely dangerous economic activity. In order to describe the risks involved more accurately, a specific injury coding based on the working process was developed. METHOD: Observation on six different types of vessels was conducted, allowing a description and a classification of the principal working processes on all kinds of vessels and a detailed classification for industrial trawlers. In industrial trawling, fish are landed for processing purposes, for example, for the production of fish oil and fish meal. The classification was subsequently used to code the injuries reported to the Danish Maritime Authority over a 5-year period. RESULTS: On industrial trawlers, 374 of 394 (95%) injuries were captured by the classification. Setting out and hauling in the gear and nets were the processes with the most injuries, accounting for 58.9% of all injuries...

  4. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    Science.gov (United States)

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  5. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.

    Science.gov (United States)

    Kounios, J; Holcomb, P J

    1994-07-01

    Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.

  6. Current status of high energy nucleon-meson transport code

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi; Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    The current status of the accelerator design code NMTC/JAERI, an outline of its physical model, and an evaluation of its accuracy are reported. To evaluate the nuclear performance of an accelerator and an intense spallation neutron source, the nuclear reactions between high-energy protons and the target nuclides, and the behavior of the various produced particles, must be modeled. The nuclear design of the spallation neutron system uses a calculation code system that couples the high-energy nucleon-meson transport code with a neutron-photon transport code. NMTC/JAERI describes the intranuclear cascade and the particle evaporation process, taking the competing fission process into account. Particle transport calculations are carried out for protons, neutrons, pi-mesons and mu-mesons. To verify and improve the accuracy of the high-energy nucleon-meson transport code, spallation and spallation neutron fragment data from integral experiments were collected. (S.Y.)

  7. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    Science.gov (United States)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity as the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved at bit rates down to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.

  8. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  9. KUPOL-M code for simulation of the VVER's accident localization system under LOCA conditions

    International Nuclear Information System (INIS)

    Efanov, A.D.; Lukyanov, A.A.; Shangin, N.N.; Zajtsev, A.A.; Solov'ev, S.L.

    2004-01-01

    The computer code KUPOL-M is developed for the analysis of the thermodynamic parameters of the medium within the full-pressure containment of NPPs with VVER reactors under LOCA conditions. The analysis takes into account the effects of non-stationary heat and mass transfer of the gas-droplet mixture in the containment compartments with natural convection, volume and surface steam condensation in the presence of non-condensables, and heat and mass exchange of the compartment atmosphere with water in the sumps. The operation of the main safety systems, such as the spray system, hydrogen catalytic recombiners, emergency core cooling pumps, valves and the fan system, is simulated in the KUPOL-M code. The main results of the code verification, including those from participation in the ISP-47 International Standard Problem on containment thermal-hydraulics, are presented. (author)

  10. Pre-processing of input files for the AZTRAN code; Pre procesamiento de archivos de entrada para el codigo AZTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Vargas E, S. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Ibarra, G., E-mail: samuel.vargas@inin.gob.mx [IPN, Av. Instituto Politecnico Nacional s/n, 07738 Ciudad de Mexico (Mexico)

    2017-09-15

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make this task easier. It is based on a new input file format with specific cards divided into two blocks, mandatory cards and optional cards, and includes pre-processing of the input file to identify possible errors within it, as well as an image generator for the specific problem based on the Python interpreter. (Author)

  11. Use of NESTLE computer code for NPP transition process analysis

    International Nuclear Information System (INIS)

    Gal'chenko, V.V.

    2001-01-01

    A newly created WWER-440 reactor model built with the NESTLE code is discussed, and results of 'fast' and 'slow' transition processes based on it are presented. The model was developed for the Rovno NPP reactor, and it can also be used for the WWER-1000 reactor at Zaporozhe NPP.

  12. Development of thermal-hydraulic safety codes for HTGRs with gas-turbine and hydrogen process cycles

    International Nuclear Information System (INIS)

    No, Hee Cheon; Yoon, Ho Joon; Lee, Byung Jin; Kim, Yong Soo; Jin, Hyeng Gon; Kim, Ji Hwan; Kim, Hyeun Min; Lim, Hong Sik

    2008-01-01

    We present three nuclear/hydrogen-related R and D activities being performed at KAIST: air-ingressed LOCA analysis code development, gas turbine analysis tool development, and hydrogen-production system analysis model development. The ICE numerical technique widely used for the safety analysis of water reactors is successfully implemented in GAMMA, in which we solve the basic equations for continuity, momentum conservation and energy conservation of the gas mixture, and mass conservation of six species (He, N2, O2, CO, CO2, and H2O). GAMMA has been extensively validated using data from 14 test facilities. We developed the SANA code to predict the characteristics of HTGR helium turbines based on a throughflow calculation with a Newton-Raphson method that overcomes the weakness of the conventional method based on a successive iteration scheme. It is found that the current method reaches stable and quick convergence, even under off-normal conditions, with the same degree of accuracy. The coupled GAMMA-SANA code was assessed by comparing its results with the steady state of the GTHTR300, and the load reduction transient was simulated for the 100% to 70% power operation. The calculation results confirm that two-dimensional throughflow modeling can successfully describe the gas turbine behavior. The dynamic equations for the distillation column of the HI process in the I-S cycle are described with four material components involved in the HI process: H2O, HI, I2, and H2. For the VLE prediction in the HI process we improved the Neumann model based on the NRTL (Non-Random Two-Liquid) model. Relative to the experimental data, the improved Neumann model shows deviations of 8.6% at maximum and 2.5% on average for the total pressure, and 9.5% at maximum for the liquid-liquid separation composition. Through a parametric analysis using the published experimental data related to the Bunsen reaction and liquid-liquid separation, an optimized operating condition for the

  13. Steam condensation modelling in aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.

    1986-01-01

    The principal subject of this study is the modelling of the condensation of steam onto, and the evaporation of water from, aerosol particles. These processes introduce a new type of term into the equation for the development of the aerosol particle size distribution. This new term confronts the code developer with three major problems: the physical modelling of the condensation/evaporation process, the discretisation of the new term, and the separate accounting for the masses of the water and of the other components. This study considered four codes which model the condensation of steam onto and its evaporation from aerosol particles: AEROSYM-M (UK), AEROSOLS/B1 (France), NAUA (Federal Republic of Germany) and CONTAIN (USA). The modelling in the codes has been addressed under three headings: the physical modelling of condensation, the mathematics of the discretisation of the equations, and the methods for modelling the separate behaviour of different chemical components of the aerosol. The codes are least advanced in the area of solute effect modelling; at present only AEROSOLS/B1 includes the effect. The effect is greater for more concentrated solutions, so codes without it will be more in error (underestimating the total airborne mass) the less condensation they predict. Data are needed on the water vapour pressure above concentrated solutions of the substances of interest (especially CsOH and CsI) if the extent to which aerosols retain water under superheated conditions is to be modelled. 15 refs

  14. Codes of conduct: An extra suave instrument of EU governance?

    DEFF Research Database (Denmark)

    Borras, Susana

    able to coordinate actors successfully (effectiveness)? and secondly, under what conditions are codes of conduct able to generate democratically legitimate political processes? The paper examines carefully a recent case study, the “Code of Conduct for the Recruitment of Researchers” (CCRR). The code...... establishes a specific set of voluntary norms and principles that shall guide the recruiting process of researchers by European research organizations (universities, public research organizations and firms) in the 33 countries of the single market minded initiative of the European Research Area. A series...

  15. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. Here we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
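The layout principle behind compiler auto-vectorization, struct-of-arrays data walked by flat, branch-free loops, can be sketched language-independently. Python stands in here for VPIC's C/C++ kernels; the push itself is a simplified one-dimensional stand-in, not VPIC's actual particle advance:

```python
def push_particles_soa(x, v, E, q_over_m, dt):
    """Accelerate-then-drift push over struct-of-arrays data: one flat,
    branch-free loop per field. In a compiled language this layout is
    what allows the compiler to auto-vectorize each loop."""
    n = len(x)
    for i in range(n):
        v[i] += q_over_m * E[i] * dt      # accelerate from the local field
    for i in range(n):
        x[i] += v[i] * dt                 # drift with the updated velocity
    return x, v

x, v = push_particles_soa([0.0, 0.0], [1.0, 2.0], [1.0, 1.0],
                          q_over_m=1.0, dt=0.5)
```

The contrast is with an array-of-structs layout (one object per particle), where the strided memory access defeats vectorization; the SIMD-queue idea in the abstract goes further by batching particles that need the same operation.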

  16. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like the one proposed here are likely to become increasingly powerful at detecting such elements.
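The entropy score mentioned above can be illustrated directly: for an alignment column whose codons all encode the same amino acid, unusually low codon entropy (no synonymous variation) is the signature of an overlapping non-coding constraint. The species data below are invented for illustration:

```python
import math
from collections import Counter

def column_entropy(codons):
    """Shannon entropy (bits) of the codon distribution in one alignment column."""
    counts = Counter(codons)
    n = len(codons)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two hypothetical columns, both coding Leucine across 8 species:
mixed_usage = ["CTG", "CTC", "TTA", "CTG", "TTG", "CTA", "CTC", "CTG"]
frozen_usage = ["CTG"] * 8   # zero synonymous variation despite 6 Leu codons
```

Under coding pressure alone, synonymous codons drift freely (`mixed_usage`, high entropy); a column like `frozen_usage`, whose entropy is implausibly low under the null mixture of coding models, is a candidate for an overlapping element such as a splicing enhancer.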

  17. 28 CFR 522.12 - Relationship between existing criminal sentences imposed under the U.S. or D.C. Code and new...

    Science.gov (United States)

    2010-07-01

    ... sentences imposed under the U.S. or D.C. Code and new civil contempt commitment orders. 522.12 Section 522..., AND TRANSFER ADMISSION TO INSTITUTION Civil Contempt of Court Commitments § 522.12 Relationship between existing criminal sentences imposed under the U.S. or D.C. Code and new civil contempt commitment...

  18. Coded ultrasound for blood flow estimation using subband processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2007-01-01

    This paper further investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise ratio (SNR) and velocity estimation performance. In this paper, broadband...... coded signals are used to increase SNR, followed by sub-band processing. The received broadband signal is filtered using a set of narrow-band filters. Estimating the velocity in each of the bands and averaging the results yields better performance compared to what would be possible when transmitting...... a narrow-band pulse directly. Also, the spatial resolution of the narrow-band pulse would be too poor for brightness-mode (B-mode) imaging and additional transmissions would be required to update the B-mode image. In the described approach, there is no need for additional transmissions, because
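A minimal sketch of the sub-band idea: estimate the axial velocity in each narrow band with the standard lag-one autocorrelation (Kasai) estimator, then average the band estimates. The signal model is an idealized, noise-free slow-time sequence; centre frequencies and scan parameters are illustrative, not those of the paper:

```python
import cmath, math

C = 1540.0          # speed of sound in tissue [m/s]
T_PRF = 1e-4        # pulse repetition interval [s]

def slow_time_signal(v, f_c, n_emissions=16):
    """Complex slow-time samples from one scatterer moving at v [m/s];
    each emission adds a Doppler phase of 4*pi*f_c*v*T_PRF/C."""
    dphi = 4.0 * math.pi * f_c * v * T_PRF / C
    return [cmath.exp(1j * dphi * k) for k in range(n_emissions)]

def kasai_velocity(samples, f_c):
    """Lag-one autocorrelation velocity estimate for one narrow band."""
    r1 = sum(samples[k + 1] * samples[k].conjugate()
             for k in range(len(samples) - 1))
    return cmath.phase(r1) * C / (4.0 * math.pi * f_c * T_PRF)

v_true = 0.3                      # m/s
bands = [4e6, 5e6, 6e6]           # narrow-band centre frequencies [Hz]
estimates = [kasai_velocity(slow_time_signal(v_true, f), f) for f in bands]
v_hat = sum(estimates) / len(estimates)
```

With noise added, the independent band estimates average down the variance, which is the performance gain the abstract describes; the broadband transmission itself preserves B-mode resolution.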

  19. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2010-04-16

    ... documents from ICC's Chicago District Office: International Code Council, 4051 W Flossmoor Road, Country... Energy Conservation Code. International Existing Building Code. International Fire Code. International...

  20. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    Science.gov (United States)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influence

  1. Modification of fuel performance code to evaluate iron-based alloy behavior under LOCA scenario

    Energy Technology Data Exchange (ETDEWEB)

    Giovedi, Claudia; Martins, Marcelo Ramos, E-mail: claudia.giovedi@labrisco.usp.br, E-mail: mrmartin@usp.br [Laboratorio de Analise, Avaliacao e Gerenciamento de Risco (LabRisco/POLI/USP), São Paulo, SP (Brazil); Abe, Alfredo; Muniz, Rafael O.R.; Gomes, Daniel de Souza; Silva, Antonio Teixeira e, E-mail: ayabe@ipen.br, E-mail: dsgomes@ipen.br, E-mail: teixiera@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    Accident tolerant fuels (ATF) have been studied since the Fukushima Daiichi accident, in research efforts to develop new materials which, under accident scenarios, could maintain fuel rod integrity for a longer period than the cladding and fuel system usually utilized in Pressurized Water Reactors (PWR). The efforts have been focused on new cladding materials, and iron-based alloys appear as possible candidates. The aim of this paper is to implement modifications in a fuel performance code to evaluate the behavior of iron-based alloys under a Loss-of-Coolant Accident (LOCA) scenario. For this, the properties related to the thermal and mechanical behavior of iron-based alloys were first obtained from the literature, appropriately adapted, and introduced into the fuel performance code subroutines. Modifications were adopted step by step, creating successive versions of the code. The implemented modifications were assessed by simulating an experiment available in the open literature (IFA-650.5) on zirconium-based alloy fuel rods submitted to LOCA conditions. The results for the iron-based alloy were compared with those obtained using the regular version of the fuel performance code for Zircaloy-4. They show that the most important properties to change are those in the subroutines for the mechanical properties of the cladding, and that burst is observed at a later time for fuel rods with iron-based alloy cladding, indicating the potential of this material as cladding for ATF purposes. (author)
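The step-by-step substitution of cladding-property subroutines can be sketched as a material dispatch table. All correlations below are invented placeholders, not FRAPCON or measured data; they only reproduce the qualitative finding that burst occurs later for the iron-based alloy:

```python
# Hypothetical, illustrative property correlations -- real analyses must use
# qualified data for Zircaloy-4 and the candidate iron-based alloy.
CLADDING_MODELS = {
    "zircaloy-4": {
        "burst_stress": lambda T: max(830.0 - 0.8 * (T - 600.0), 50.0),  # MPa
    },
    "iron-based": {
        "burst_stress": lambda T: max(950.0 - 0.6 * (T - 600.0), 50.0),  # MPa
    },
}

def time_to_burst(material, hoop_stress, T0, ramp_rate):
    """March the cladding temperature up a LOCA-like ramp (1 s steps) until
    the hoop stress exceeds the material's burst strength; returns time [s]."""
    burst_stress = CLADDING_MODELS[material]["burst_stress"]
    t, T = 0.0, T0
    while burst_stress(T) > hoop_stress:
        t += 1.0
        T = T0 + ramp_rate * t
    return t

t_zr = time_to_burst("zircaloy-4", hoop_stress=400.0, T0=600.0, ramp_rate=5.0)
t_fe = time_to_burst("iron-based", hoop_stress=400.0, T0=600.0, ramp_rate=5.0)
```

Swapping only the entries in the dispatch table mirrors the paper's approach of replacing one property subroutine at a time and re-running the IFA-650.5 case.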

  2. Modification of fuel performance code to evaluate iron-based alloy behavior under LOCA scenario

    International Nuclear Information System (INIS)

    Giovedi, Claudia; Martins, Marcelo Ramos; Abe, Alfredo; Muniz, Rafael O.R.; Gomes, Daniel de Souza; Silva, Antonio Teixeira e

    2017-01-01

    Accident tolerant fuels (ATF) have been studied since the Fukushima Daiichi accident, in research efforts to develop new materials which, under accident scenarios, could maintain fuel rod integrity for a longer period than the cladding and fuel system usually utilized in Pressurized Water Reactors (PWR). The efforts have been focused on new cladding materials, and iron-based alloys appear as possible candidates. The aim of this paper is to implement modifications in a fuel performance code to evaluate the behavior of iron-based alloys under a Loss-of-Coolant Accident (LOCA) scenario. For this, the properties related to the thermal and mechanical behavior of iron-based alloys were first obtained from the literature, appropriately adapted, and introduced into the fuel performance code subroutines. Modifications were adopted step by step, creating successive versions of the code. The implemented modifications were assessed by simulating an experiment available in the open literature (IFA-650.5) on zirconium-based alloy fuel rods submitted to LOCA conditions. The results for the iron-based alloy were compared with those obtained using the regular version of the fuel performance code for Zircaloy-4. They show that the most important properties to change are those in the subroutines for the mechanical properties of the cladding, and that burst is observed at a later time for fuel rods with iron-based alloy cladding, indicating the potential of this material as cladding for ATF purposes. (author)

  3. Implementation of a dry process fuel cycle model into the DYMOND code

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jeong, Chang Joon; Choi, Hang Bok

    2004-01-01

    For the analysis of a dry process fuel cycle, new modules were implemented into the fuel cycle analysis code DYMOND, which was developed by Argonne National Laboratory. Modifications were made to the energy demand prediction model, the Canada Deuterium Uranium (CANDU) reactor model, the model for direct use of spent Pressurized Water Reactor (PWR) fuel in CANDU reactors (the DUPIC fuel cycle), the fuel cycle calculation module, and the input/output modules. The performance of the modified DYMOND code was assessed for postulated once-through fuel cycle models including both the PWR and the CANDU reactor. This paper presents the modifications of the DYMOND code and the results of sample calculations for the PWR once-through and DUPIC fuel cycles
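The benefit a DUPIC module captures can be sketched as a one-line mass balance: fuel burned a second time in a CANDU delivers more energy per tonne, so less natural uranium is needed per unit of energy. The numbers below are round illustrative values, not DYMOND's correlations:

```python
def uranium_demand(energy_twh, pwr_burnup=35.0, candu_extra_burnup=15.0,
                   nat_u_per_t_fuel=8.0, dupic=False):
    """Very simplified fuel-cycle mass balance [t natural U].
    Burnups in MWd/kgHM; nat_u_per_t_fuel = tonnes of natural uranium needed
    per tonne of enriched PWR fuel (all values illustrative)."""
    # Energy per tonne of fuel: burnup [MWd/kgHM] * 1000 kg * 24 h -> MWh/t
    per_tonne_mwh = pwr_burnup * 1000 * 24
    if dupic:  # spent PWR fuel is refabricated and burned again in a CANDU
        per_tonne_mwh += candu_extra_burnup * 1000 * 24
    fuel_tonnes = energy_twh * 1e6 / per_tonne_mwh
    return fuel_tonnes * nat_u_per_t_fuel

once_through = uranium_demand(100.0)
dupic_cycle = uranium_demand(100.0, dupic=True)
```

The real DYMOND modules track isotopics, lead times, and reactor fleet composition over time, but the direction of the result is the same: the DUPIC path reduces front-end uranium demand for a given energy output.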

  4. Multi-processing CTH: Porting legacy FORTRAN code to MP hardware

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.L.; Elrick, M.G.; Hertel, E.S. Jr.

    1996-12-31

    CTH is a family of codes developed at Sandia National Laboratories for use in modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH has historically been run on systems where the data are directly accessible to the cpu, such as workstations and vector supercomputers. Multiple cpus can be used if all data are accessible to all cpus. This is accomplished by placing compiler directives or subroutine calls within the source code. The CTH team has implemented this scheme for Cray shared memory machines under the Unicos operating system. This technique is effective, but difficult to port to other (similar) shared memory architectures because each vendor has a different format of directives or subroutine calls. A different model of high performance computing is one where many (> 1,000) cpus work on a portion of the entire problem and communicate by passing messages that contain boundary data. Most, if not all, codes that run effectively on parallel hardware were written with a parallel computing paradigm in mind. Modifying an existing code written for serial nodes poses a significantly different set of challenges that will be discussed. CTH, a legacy FORTRAN code, has been modified to allow for solutions on distributed memory parallel computers such as the IBM SP2, the Intel Paragon, Cray T3D, or a network of workstations. The message passing version of CTH will be discussed and example calculations will be presented along with performance data. Current timing studies indicate that CTH is 2--3 times faster than equivalent C++ code written specifically for parallel hardware. CTH on the Intel Paragon exhibits linear speed up with problems that are scaled (constant problem size per node) for the number of parallel nodes.
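The message-passing restructuring described above can be sketched in miniature: each subdomain updates its cells using ghost values received from its neighbour, and the decomposed sweep reproduces the serial one exactly. Python lists stand in for the FORTRAN arrays and the explicit "messages" for MPI-style sends; the diffusion update is a generic stand-in for CTH's Eulerian step:

```python
def step_subdomain(u, left_ghost, right_ghost, alpha=0.25):
    """One explicit diffusion update on a strip, using ghost values received
    from the neighbouring subdomains. Called with its own edge values as
    ghosts, it doubles as the serial (single-node) reference sweep."""
    padded = [left_ghost] + u + [right_ghost]
    return [padded[i] + alpha * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

def step_decomposed(u, split):
    """Split the 1-D field across two 'nodes', exchange boundary data,
    then update each half independently."""
    left, right = u[:split], u[split:]
    # message passing: each node sends its edge cell to its neighbour
    msg_to_right, msg_to_left = left[-1], right[0]
    new_left = step_subdomain(left, left_ghost=left[0], right_ghost=msg_to_left)
    new_right = step_subdomain(right, left_ghost=msg_to_right, right_ghost=right[-1])
    return new_left + new_right
```

Because only boundary cells are exchanged, communication volume scales with the surface of each subdomain while work scales with its volume, which is why scaled problems show the linear speed-up reported for the Paragon.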

  5. RODSWELL: a computer code for the thermomechanical analysis of fuel rods under LOCA conditions

    International Nuclear Information System (INIS)

    Casadei, F.; Laval, H.; Donea, J.; Jones, P.M.; Colombo, A.

    1984-01-01

    The present report is the user's manual for the computer code RODSWELL developed at the JRC-Ispra for the thermomechanical analysis of LWR fuel rods under simulated loss-of-coolant accident (LOCA) conditions. The code calculates the variation in space and time of all significant fuel rod variables, including fuel, gap and cladding temperature, fuel and cladding deformation, cladding oxidation and rod internal pressure. The essential characteristics of the code are briefly outlined here. The model is particularly designed to perform a full thermal and mechanical analysis in both the azimuthal and radial directions. Thus, azimuthal temperature gradients arising from pellet eccentricity, flux tilt, arbitrary distribution of heat sources in the fuel and the cladding and azimuthal variation of coolant conditions can be treated. The code combines a transient 2-dimensional heat conduction code and a 1-dimensional mechanical model for the cladding deformation. The fuel rod is divided into a number of axial sections and a detailed thermomechanical analysis is performed within each section in radial and azimuthal directions. In the following sections, instructions are given for the definition of the data files and the semi-variable dimensions. Then follows a complete description of the input data. Finally, the restart option is described
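The azimuthal part of such a treatment can be sketched as an explicit conduction sweep around one ring of cladding nodes (periodic in the azimuthal direction), the kind of update performed within each axial section. Node values and the mesh Fourier number are illustrative, not RODSWELL's:

```python
def azimuthal_conduction_step(T, fo=0.2):
    """One explicit conduction update around a periodic azimuthal ring of
    nodes; fo is the mesh Fourier number (explicit stability needs fo <= 0.5)."""
    n = len(T)
    return [T[i] + fo * (T[(i - 1) % n] - 2.0 * T[i] + T[(i + 1) % n])
            for i in range(n)]

# Hot spot from pellet eccentricity on one side of the ring [K]:
ring = [900.0, 850.0, 800.0, 750.0, 800.0, 850.0]
smoothed = ring
for _ in range(20):
    smoothed = azimuthal_conduction_step(smoothed)
```

Energy is conserved around the ring while the eccentricity-driven azimuthal gradient relaxes, which is the effect a purely radial (1-D) model cannot represent.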

  6. Word attributes and lateralization revisited: implications for dual coding and discrete versus continuous processing.

    Science.gov (United States)

    Boles, D B

    1989-01-01

    Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.

  7. Modeling the UO2 ex-AUC pellet process and predicting the fuel rod temperature distribution under steady-state operating condition

    Science.gov (United States)

    Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2018-06-01

    Modeling of the uranium dioxide pellet process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and prediction of the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000, designed by Westinghouse Electric Corporation, with pellet fabrication parameters taken from this study, provided the input data for the code. The predictions suggest a relationship between the fabrication parameters of UO2 pellets and their temperature distribution in the nuclear reactor.
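The RSM side of the study can be illustrated with a one-factor quadratic response surface fitted by least squares; the data below are synthetic, standing in for, say, pellet density versus a fabrication parameter such as sintering temperature:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 (a one-factor response
    surface) via the 3x3 normal equations with partial pivoting."""
    n = len(xs)
    s = lambda p: sum(x**p for x in xs)
    sy = lambda p: sum(y * x**p for x, y in zip(xs, ys))
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    b = [sy(0), sy(1), sy(2)]
    for col in range(3):                       # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                        # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # [a, b, c]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]                 # coded factor levels
ys = [2.0 + 3.0 * x - 0.5 * x * x for x in xs] # synthetic responses
a, b, c = fit_quadratic(xs, ys)
```

In an RSM study the fitted vertex, `x* = -b/(2c)`, locates the factor setting that optimizes the response; multi-factor designs extend the same normal-equation machinery.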

  8. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Science.gov (United States)

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  9. Updated Covariance Processing Capabilities in the AMPX Code System

    International Nuclear Information System (INIS)

    Wiarda, Dorothea; Dunn, Michael E.

    2007-01-01

    A concerted effort is in progress within the nuclear data community to provide new cross-section covariance data evaluations to support sensitivity/uncertainty analyses of fissionable systems. The objective of this work is to update the processing capabilities of the AMPX code system to process the latest Evaluated Nuclear Data File (ENDF)/B formats and generate covariance data libraries for radiation transport software such as SCALE. The module PUFF-IV was updated to allow processing of the new ENDF covariance formats in the resolved resonance region. In the resolved resonance region, covariance matrices are given in terms of resonance parameters, which need to be processed into covariance matrices with respect to the group-averaged cross-section data. The parameter covariance matrix can be quite large if the evaluation has many resonances. The PUFF-IV code has recently been used to process a 235U evaluation, which was prepared in collaboration between Oak Ridge National Laboratory and Los Alamos National Laboratory.
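Propagating a resonance-parameter covariance to group cross sections uses the standard first-order "sandwich" rule, C_g = S C_p S^T, where S holds the sensitivities of the group cross sections to the parameters. The sketch below uses a toy 2x2 case with illustrative numbers only:

```python
def sandwich(S, Cp):
    """Group covariance C_g = S * C_p * S^T, where S[g][k] is the sensitivity
    d(sigma_g)/d(p_k) of group cross sections to resonance parameters."""
    n_groups, n_params = len(S), len(S[0])
    SCp = [[sum(S[i][k] * Cp[k][j] for k in range(n_params))
            for j in range(n_params)] for i in range(n_groups)]
    return [[sum(SCp[i][k] * S[j][k] for k in range(n_params))
             for j in range(n_groups)] for i in range(n_groups)]

S = [[1.0, 0.0], [0.0, 2.0]]        # toy sensitivity matrix (groups x params)
Cp = [[0.04, 0.01], [0.01, 0.09]]   # toy resonance-parameter covariance
Cg = sandwich(S, Cp)
```

For a real evaluation with many resonances, C_p is large and dense, which is exactly the computational burden the abstract mentions for the 235U case.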

  10. Recent development for the ITS code system: Parallel processing and visualization

    International Nuclear Information System (INIS)

    Fan, W.C.; Turner, C.D.; Halbleib, J.A. Sr.; Kensek, R.P.

    1996-01-01

    A brief overview is given for two software developments related to the ITS code system. These developments provide parallel processing and visualization capabilities and thus allow users to perform ITS calculations more efficiently. Timing results and a graphical example are presented to demonstrate these capabilities

  11. Process of cross section generation for radiation shielding calculations, using the NJOY code

    International Nuclear Information System (INIS)

    Ono, S.; Corcuera, R.P.

    1986-10-01

    The process of multigroup cross-section generation for radiation shielding calculations, using the NJOY code, is explained. Photon production cross sections, processed by the GROUPR module, and photon interaction cross sections, processed by the GAMINR module, are given. These data are compared with the data produced by the AMPX system and with published data. (author) [pt
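At the core of any such multigroup processing step is a flux-weighted collapse of pointwise cross sections, sigma_g = sum(sigma*phi)/sum(phi) over each group; the sketch below uses toy data on a shared energy grid:

```python
def group_average(energies, sigma, flux, group_bounds):
    """Flux-weighted multigroup collapse of pointwise cross sections:
    sigma_g = sum(sigma*phi) / sum(phi) over the points in each group."""
    out = []
    for lo, hi in group_bounds:
        idx = [i for i, e in enumerate(energies) if lo <= e < hi]
        w = sum(flux[i] for i in idx)
        out.append(sum(sigma[i] * flux[i] for i in idx) / w)
    return out

energies = [1.0, 2.0, 3.0, 4.0]      # toy grid (arbitrary units)
sigma = [10.0, 20.0, 30.0, 40.0]     # pointwise cross sections
flux = [1.0, 1.0, 1.0, 3.0]          # weighting spectrum
sig_g = group_average(energies, sigma, flux, [(1.0, 3.0), (3.0, 5.0)])
```

Modules such as GROUPR and GAMINR perform this integral with proper quadrature over resonance-reconstructed, Doppler-broadened data; the toy version only shows how the weighting spectrum pulls the group value toward the high-flux points.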

  12. FARST: A computer code for the evaluation of FBR fuel rod behavior under steady-state/transient conditions

    International Nuclear Information System (INIS)

    Nakamura, M.; Sakagami, M.

    1984-01-01

    FARST, a computer code for the evaluation of fuel rod thermal and mechanical behavior under steady-state/transient conditions, has been developed. The code characteristics are summarized as follows: (I) FARST evaluates fuel rod behavior under transient conditions; the code analyzes thermal and mechanical phenomena within a fuel rod, taking into account the temperature change in the coolant surrounding the fuel rod. (II) Permanent strains, such as plastic, creep and swelling strains, as well as thermoelastic deformations, can be analyzed using the strain increment method. (III) The axial force and contact pressure which act on the fuel stack and cladding are analyzed based on stick/slip conditions. (IV) FARST uses a pellet swelling model which depends on the contact pressure between pellet and cladding, and an empirical pellet relocation model designated the 'jump relocation model'. The code was successfully applied to analyses of fuel rod irradiation data from the CABRI pulse reactor for nuclear safety research in Cadarache and the NSRR pulse reactor for nuclear safety research at the Japan Atomic Energy Research Institute. The code was further applied to stress analysis of a 1000 MW class large FBR plant fuel rod during transient conditions. The steady-state model used so far gave conservative results for cladding stress during an overpower transient, but underestimated the cladding stress during a rapid decrease of coolant temperature. (orig.)
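Characteristic (II), the strain increment method, can be sketched as follows: permanent strain components (plastic, creep, swelling) are accumulated as increments over the time steps, while the thermoelastic part is re-evaluated from the current load state. Rates and values below are illustrative placeholders:

```python
def march_strains(steps, dt, creep_rate, swelling_rate, elastic_of_t):
    """Strain-increment method: permanent strains accumulate as increments
    each step; the elastic strain is recomputed from the current state and
    added on top of the accumulated permanent strain."""
    creep = swelling = 0.0
    history = []
    for k in range(steps):
        creep += creep_rate(k * dt) * dt       # permanent: integrated increment
        swelling += swelling_rate * dt         # permanent: integrated increment
        total = elastic_of_t(k * dt) + creep + swelling
        history.append(total)
    return history

history = march_strains(steps=3, dt=1.0,
                        creep_rate=lambda t: 1e-6,     # illustrative 1/s rate
                        swelling_rate=2e-6,
                        elastic_of_t=lambda t: 1e-4)   # constant load here
```

The split matters for transients: if the load drops, the elastic term recovers immediately while the accumulated permanent strain does not, which is what an incremental formulation captures and a purely steady-state one misses.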

  13. MHD code using multi graphical processing units: SMAUG+

    Science.gov (United States)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations, with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  14. Simulation codes of chemical separation process of spent fuel reprocessing. Tool for process development and safety research

    International Nuclear Information System (INIS)

    Asakura, Toshihide; Sato, Makoto; Matsumura, Masakazu; Morita, Yasuji

    2005-01-01

    This paper reviews the continued development and utilization of the Extraction System Simulation Code for Advanced Reprocessing (ESSCAR). From the viewpoint of development, more tests with spent fuel and more calculations should be performed, with better understanding of the physico-chemical phenomena in a separation process. From the viewpoint of process safety research on fuel cycle facilities, it is important to know the process behavior of key substances that are highly reactive but present only in trace amounts. (author)

  15. The Therapy Process Observational Coding System for Child Psychotherapy Strategies Scale

    Science.gov (United States)

    McLeod, Bryce D.; Weisz, John R.

    2010-01-01

    Most everyday child and adolescent psychotherapy does not follow manuals that document the procedures. Consequently, usual clinical care has remained poorly understood and rarely studied. The Therapy Process Observational Coding System for Child Psychotherapy-Strategies scale (TPOCS-S) is an observational measure of youth psychotherapy procedures…

  16. A political perspective on business elites and institutional embeddedness in the UK code-issuing process

    NARCIS (Netherlands)

    Haxhi, I.; van Ees, H.; Sorge, A.

    2013-01-01

    Manuscript Type: Perspective. Research Question/Issue: What is the role of institutional actors and business elites in the development of UK corporate governance codes? In the current paper, we explore the UK code-issuing process by focusing on the UK actors, their power and interplay. Research

  17. A Political Perspective on Business Elites and Institutional Embeddedness in the UK Code-Issuing Process

    NARCIS (Netherlands)

    Haxhi, Ilir; van Ees, Hans; Sorge, Arndt

    2013-01-01

    Manuscript Type: Perspective. Research Question/Issue: What is the role of institutional actors and business elites in the development of UK corporate governance codes? In the current paper, we explore the UK code-issuing process by focusing on the UK actors, their power and interplay. Research

  18. A Political Perspective on Business Elites and Institutional Embeddedness in the UK Code-Issuing Process

    NARCIS (Netherlands)

    Haxhi, Ilir; van Ees, Hans; Sorge, Arndt

    Manuscript Type: Perspective. Research Question/Issue: What is the role of institutional actors and business elites in the development of UK corporate governance codes? In the current paper, we explore the UK code-issuing process by focusing on the UK actors, their power and interplay. Research

  19. Performance Analysis of a De-correlated Modified Code Tracking Loop for Synchronous DS-CDMA System under Multiuser Environment

    Science.gov (United States)

    Wu, Ya-Ting; Wong, Wai-Ki; Leung, Shu-Hung; Zhu, Yue-Sheng

    This paper presents the performance analysis of a De-correlated Modified Code Tracking Loop (D-MCTL) for synchronous direct-sequence code-division multiple-access (DS-CDMA) systems in a multiuser environment. Previous studies have shown that the imbalance of multiple access interference (MAI) in the time-lead and time-lag portions of the signal causes tracking bias or instability problems in traditional correlating tracking loops such as the delay lock loop (DLL) or the modified code tracking loop (MCTL). In this paper, we exploit the de-correlating technique to combat the MAI at the on-time code position of the MCTL. Unlike applying the same technique to the DLL, which requires an extensive search algorithm to compensate for the noise imbalance and may introduce a small tracking bias under low signal-to-noise ratio (SNR), the proposed D-MCTL has much lower computational complexity and exhibits zero tracking bias over the whole range of SNR, regardless of the number of interfering users. Furthermore, performance analysis and simulations based on Gold codes show that the proposed scheme has better mean square tracking error, mean time to lose lock and near-far resistance than the other tracking schemes, including the traditional DLL (T-DLL), the traditional MCTL (T-MCTL) and the modified de-correlated DLL (MD-DLL).
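The de-correlating step itself can be shown on a toy two-user synchronous system (short ±1 codes invented for illustration, not actual Gold codes): a strong interferer flips the matched-filter decision (the near-far problem), while applying the inverse of the code cross-correlation matrix recovers the bit exactly:

```python
def correlate(code, rx):
    """Normalized correlation of a spreading code with the received chips."""
    return sum(c * r for c, r in zip(code, rx)) / len(code)

# Two toy length-8 spreading codes with cross-correlation 0.25
c1 = [+1, +1, +1, -1, +1, -1, -1, +1]
c2 = [+1, +1, -1, -1, +1, +1, -1, -1]

b1, b2 = +1, -1                 # transmitted bits
a2 = 5.0                        # strong interferer amplitude (near-far case)
rx = [b1 * x + a2 * b2 * y for x, y in zip(c1, c2)]   # noise-free channel

y1, y2 = correlate(c1, rx), correlate(c2, rx)  # matched filter: MAI-corrupted
rho = correlate(c1, c2)                        # code cross-correlation

# De-correlator: apply the inverse of the 2x2 matrix [[1, rho], [rho, 1]]
det = 1.0 - rho * rho
z1 = (y1 - rho * y2) / det
z2 = (-rho * y1 + y2) / det
```

In the D-MCTL the same inverse is applied to the on-time correlator outputs of the tracking loop rather than to data decisions, which is what removes the MAI-induced tracking bias.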

  20. The Coding Causes of Death in HIV (CoDe) Project: initial results and evaluation of methodology

    DEFF Research Database (Denmark)

    Kowalska, Justyna D; Friis-Møller, Nina; Kirk, Ole

    2011-01-01

    The Coding Causes of Death in HIV (CoDe) Project aims to deliver a standardized method for coding the underlying cause of death in HIV-positive persons, suitable for clinical trials and epidemiologic studies.

  1. Fuel performance analysis code 'FAIR'

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1994-01-01

    For modelling nuclear reactor fuel rod behaviour of water cooled reactors under severe power maneuvering and high burnups, a mechanistic fuel performance analysis code FAIR has been developed. The code incorporates a finite element based thermomechanical module, a physically based fission gas release module and relevant models for fuel-related phenomena, such as pellet cracking, densification and swelling, radial flux redistribution across the pellet due to the build-up of plutonium near the pellet surface, and pellet clad mechanical interaction/stress corrosion cracking (PCMI/SCC) failure of the sheath. The code follows the established principles of fuel rod analysis programmes, such as coupling of the thermal and mechanical solutions along with the fission gas release calculations, analysing different axial segments of the fuel rod simultaneously, and providing means for performing local analyses such as clad ridging analysis. The modular nature of the code offers flexibility in making modifications easily, for example for modelling MOX fuels and thorium based fuels. For performing analysis of fuel rods subjected to very long power histories within a reasonable amount of time, the code has been parallelised and commissioned on the ANUPAM parallel processing system developed at Bhabha Atomic Research Centre (BARC). (author). 37 refs

  2. NADAC and MERGE: computer codes for processing neutron activation analysis data

    International Nuclear Information System (INIS)

    Heft, R.E.; Martin, W.E.

    1977-01-01

    Absolute disintegration rates of specific radioactive products induced by neutron irradiation of a sample are determined by spectrometric analysis of gamma-ray emissions. Nuclide identification and quantification are carried out by a complex computer code, GAMANAL (described elsewhere). The output of GAMANAL is processed by NADAC, a computer code that converts the data on observed disintegration rates to data on the elemental composition of the original sample. Computations by NADAC are on an absolute basis in that stored nuclear parameters are used, rather than comparing the observed disintegration rate with the rate obtained by concurrent irradiation of elemental standards. The NADAC code provides for the computation of complex cases, including those involving interrupted irradiations, parent-daughter decay situations where the daughter may also be produced independently, nuclides with very short half-lives compared to the counting interval, and those involving interference by competing neutron-induced reactions. The NADAC output consists of a printed report, which summarizes the analytical results, and a card-image file, which can be used as input to another computer code, MERGE. The purpose of MERGE is to combine the results of multiple analyses and produce a single final answer, based on all available information, for each element found
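    The "absolute basis" computation described above can be illustrated with the standard neutron activation equation. The sketch below (illustrative parameter values, not NADAC's actual algorithm or nuclear data tables) inverts that equation to recover the number of target atoms, and hence the element mass, from an observed disintegration rate:

    ```python
    import math

    N_A = 6.02214076e23  # Avogadro's number, atoms/mol

    def activity(n_atoms, sigma_b, flux, half_life, t_irr, t_decay):
        """Disintegration rate (Bq) after irradiating n_atoms of a target nuclide:
        A = n * sigma * phi * (1 - e^(-lambda*t_irr)) * e^(-lambda*t_decay)."""
        lam = math.log(2.0) / half_life
        sigma_cm2 = sigma_b * 1e-24          # barns -> cm^2
        return (n_atoms * sigma_cm2 * flux *
                (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_decay))

    def element_mass(A_meas, sigma_b, flux, half_life, t_irr, t_decay,
                     molar_mass, abundance):
        """Invert the activation equation: measured activity -> grams of element."""
        lam = math.log(2.0) / half_life
        sigma_cm2 = sigma_b * 1e-24
        n = A_meas / (sigma_cm2 * flux *
                      (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_decay))
        return n * molar_mass / (N_A * abundance)
    ```

    A round trip (compute the activity of a known mass, then invert it) recovers the original mass exactly, which is the essence of the absolute method: no elemental standards are irradiated, only stored nuclear parameters are used.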

  3. Single integrated device for optical CDMA code processing in dual-code environment.

    Science.gov (United States)

    Huang, Yue-Kai; Glesk, Ivan; Greiner, Christoph M; Iazkov, Dmitri; Mossberg, Thomas W; Wang, Ting; Prucnal, Paul R

    2007-06-11

    We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code length.

  4. User's manual for seismic analysis code 'SONATINA-2V'

    International Nuclear Information System (INIS)

    Hanawa, Satoshi; Iyoku, Tatsuo

    2001-08-01

    The seismic analysis code SONATINA-2V has been developed to analyze the behavior of the HTTR core graphite components under seismic excitation. The SONATINA-2V code is a two-dimensional computer program capable of analyzing the vertical arrangement of the HTTR graphite components, such as fuel blocks, replaceable reflector blocks and permanent reflector blocks, as well as their restraint structures. In the analytical model, each block is treated as a rigid body and is restrained by dowel pins which restrict relative horizontal movement but allow vertical and rocking motions between upper and lower blocks. Moreover, the SONATINA-2V code is capable of analyzing the core vibration behavior under simultaneous excitation in the vertical and horizontal directions. The SONATINA-2V code is composed of the main program, a pre-processor for preparing the input data for SONATINA-2V, and a post-processor for data processing and for producing graphics from the analytical results. Though the SONATINA-2V code was developed to run on the MSP computer system of the Japan Atomic Energy Research Institute (JAERI), that computer system was retired as computer technology advanced. Therefore, the analysis code was improved so that it can be operated on the UNIX machine, the SR8000 computer system, of JAERI. The user's manual for the seismic analysis code SONATINA-2V, including the pre- and post-processors, is given in the present report. (author)

  5. Development of a new flux map processing code for moveable detector system in PWR

    Energy Technology Data Exchange (ETDEWEB)

    Li, W.; Lu, H.; Li, J.; Dang, Z.; Zhang, X. [China Nuclear Power Technology Research Institute, 47 F/A Jiangsu Bldg., Yitian Road, Futian District, Shenzhen 518026 (China); Wu, Y.; Fan, X. [Information Technology Center, China Guangdong Nuclear Power Group, Shenzhen 518000 (China)

    2013-07-01

    This paper presents an introduction to the development of the flux map processing code MAPLE developed by the China Nuclear Power Technology Research Institute (CNPPJ), China Guangdong Nuclear Power Group (CGN). The method used to obtain the three-dimensional 'measured' power distribution from the measurement signals is also described. Three methods, namely the Weight Coefficient Method (WCM), the Polynomial Expand Method (PEM) and the Thin Plate Spline (TPS) method, have been applied to fit the deviation between measured and predicted results on the two-dimensional radial plane. The measured flux map data of the LINGAO nuclear power plant (NPP) are processed using MAPLE as a test case to compare the effectiveness of the three methods, combined with a 3D neutronics code, COCO. The assembly power distribution results show that the MAPLE results are reasonable and satisfactory. More verification and validation of the MAPLE code will be carried out in the future. (authors)
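    Of the three fitting methods named above, a polynomial expansion is the simplest to sketch. The toy example below is in the spirit of the Polynomial Expand Method but is not MAPLE's actual formulation: hypothetical detector coordinates and deviations are fitted with a low-order 2D polynomial by least squares, so the measured-minus-predicted deviation can be evaluated at uninstrumented positions.

    ```python
    import numpy as np

    def fit_deviation(xy, dev, order=2):
        """Least-squares fit of a deviation field d(x, y) with a 2D polynomial
        of total degree <= order. Returns the monomial exponents and coefficients."""
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack([xy[:, 0] ** i * xy[:, 1] ** j for i, j in terms])
        coef, *_ = np.linalg.lstsq(A, dev, rcond=None)
        return terms, coef

    def eval_deviation(terms, coef, xy):
        """Evaluate the fitted polynomial at new (x, y) positions."""
        A = np.column_stack([xy[:, 0] ** i * xy[:, 1] ** j for i, j in terms])
        return A @ coef
    ```

    With a deviation field that is itself a low-order polynomial, the fit reproduces it exactly at positions where no "detector" was placed, which is the role such a fit plays in filling in the unmeasured part of the radial plane.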

  6. Computer codes in particle transport physics

    International Nuclear Information System (INIS)

    Pesic, M.

    2004-01-01

    is given. The importance of validation and verification of data and computer codes is briefly underlined. Examples of applications of the MCNPX, FLUKA and SHIELD codes to the simulation of some processes in nature, from reactor physics, ion medical therapy, cross section calculations and the design of accelerator driven sub-critical systems to astrophysics and the shielding of spaceships, are shown. More reliable and more frequent cross section data in the intermediate and high energy range for particle transport and interactions with matter are expected in the near future, as a result of new experimental investigations that are under way with the aim of validating the theoretical models currently applied in the codes. These new data libraries are expected to be much larger and more comprehensive than the existing ones, requiring more computer memory and faster CPUs. Updated versions of the codes to be developed in the future, besides sequential computation versions, will also include MPI or PVM options to allow faster running of the code at acceptable cost for an end-user. A new option is expected to be implemented in the codes too: an end-user written application for a particular problem could be added relatively simply to the general source code script. Initial work on full implementation of a graphical user interface for preparing input and analysing output of the codes, and on the ability to interrupt and/or continue code running, should be upgraded to a user-friendly level. (author)

  7. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

    A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have under-estimated the peak cladding temperature (PCT) with earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies of both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between the DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed with the original and modified MARS codes for the Flecht-Seaset test and the RBHT test. Improvements are observed in the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen with the modified version of the MARS code. (author)

  8. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    Science.gov (United States)

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
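    The pipeline this abstract describes, vectorising short free-text answers and then grouping them by similarity, can be sketched with nothing beyond the standard library: a bag-of-words representation plus a small k-means loop. This is a deterministic toy, not the authors' implementation; a production coder would add stemming, TF-IDF weighting and an open NLP library, as the abstract indicates.

    ```python
    from collections import Counter

    def dist2(a, b):
        """Squared Euclidean distance between two equal-length vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def vectorize(responses):
        """Bag-of-words count vectors over the shared vocabulary."""
        vocab = sorted({w for r in responses for w in r.lower().split()})
        index = {w: i for i, w in enumerate(vocab)}
        vecs = []
        for r in responses:
            v = [0.0] * len(vocab)
            for w, n in Counter(r.lower().split()).items():
                v[index[w]] = float(n)
            vecs.append(v)
        return vecs

    def kmeans(vecs, k, iters=20):
        """Plain k-means with deterministic farthest-point seeding."""
        centroids = [list(vecs[0])]
        while len(centroids) < k:
            far = max(vecs, key=lambda v: min(dist2(v, c) for c in centroids))
            centroids.append(list(far))
        labels = [0] * len(vecs)
        for _ in range(iters):
            labels = [min(range(k), key=lambda c: dist2(v, centroids[c]))
                      for v in vecs]
            for c in range(k):
                members = [v for v, lab in zip(vecs, labels) if lab == c]
                if members:
                    centroids[c] = [sum(col) / len(members)
                                    for col in zip(*members)]
        return labels
    ```

    On a handful of responses about two clearly distinct topics, the two topics end up in separate clusters, which a human rater can then label once per cluster instead of once per response.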

  9. Development of the Log-in Process and the Operation Process for the VHTR-SI Process Dynamic Simulation Code

    International Nuclear Information System (INIS)

    Chang, Jiwoon; Shin, Youngjoon; Kim, Jihwan; Lee, Kiyoung; Lee, Wonjae; Chang, Jonghwa; Youn, Cheung

    2009-01-01

    The VHTR-SI process is a hydrogen production technique using sulfur and iodine. As its energy source, the SI process uses high-temperature (about 950 deg. C) He gas, which is the reactor coolant. The Korea Atomic Energy Research Institute Dynamic Simulation Code (KAERI DySCo) is an integrated application software that simulates the dynamic behavior of the VHTR-SI process. Dynamic modeling is used to express and model the behavior of the software system over time. It deals with the control flow of the system, the interaction of objects and the order of actions in terms of time and transitions, by using sequence diagrams and state transition diagrams. In this paper, we present a user log-in process and an operation process for KAERI DySCo by using a sequence diagram and a state transition diagram

  10. Numerical simulations of inertial confinement fusion hohlraum with LARED-integration code

    International Nuclear Information System (INIS)

    Li Jinghong; Li Shuanggui; Zhai Chuanlei

    2011-01-01

    In the target design of the Inertial Confinement Fusion (ICF) program, it is common practice to apply radiation hydrodynamics codes to study the key physical processes that occur in the ICF process, such as hohlraum physics, radiation drive symmetry and capsule implosion physics in the radiation-drive approach of ICF. Recently, much effort has been made to develop our 2D integrated simulation capability for laser fusion with a variety of optional physical models and numerical methods. In order to effectively integrate the existing codes and to facilitate the development of new codes, we are developing an object-oriented structured-mesh parallel code-supporting infrastructure called JASMIN. Based on the two-dimensional three-temperature hohlraum physics code LARED-H and the two-dimensional multi-group radiative transfer code LARED-R, we have developed a new generation two-dimensional laser fusion code under the JASMIN infrastructure, which enables us to simulate the whole process of laser fusion from the laser beams' entrance into the hohlraum to the end of implosion. In this paper, we give a brief description of our new-generation two-dimensional laser fusion code, named LARED-Integration, especially its physical models, and present some simulation results of hohlraums. (author)

  11. Validation of the TRACR3D code for soil water flow under saturated/unsaturated conditions in three experiments

    International Nuclear Information System (INIS)

    Perkins, B.; Travis, B.; DePoorter, G.

    1985-01-01

    Validation of the TRACR3D code in a one-dimensional form was obtained for flow of soil water in three experiments. In the first experiment, a pulse of water entered a crushed-tuff soil and initially moved under conditions of saturated flow, quickly followed by unsaturated flow. In the second experiment, steady-state unsaturated flow took place. In the final experiment, two slugs of water entered crushed tuff under field conditions. In all three experiments, experimentally measured data for volumetric water content agreed, within experimental errors, with the volumetric water content predicted by the code simulations. The experiments and simulations indicated the need for accurate knowledge of boundary and initial conditions, amount and duration of moisture input, and relevant material properties as input into the computer code. During the validation experiments, limitations on monitoring of water movement in waste burial sites were also noted. 5 references, 34 figures, 9 tables

  12. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis of the operating PWRs as well as the PWRs under construction in Korea. The TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady state simulation as well as for non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components and operator actions, and the transients caused by such malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analyses for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  13. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By
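    The measure sketched in this abstract can be computed directly for the standard genetic code. The sketch below uses the Kyte-Doolittle hydrophobicity scale as the amino acid property and averages the squared property difference over all single-nucleotide changes between two sense (non-stop) codons. This is one common variant of the mean square measure; the paper's exact weighting and scale may differ.

    ```python
    BASES = "TCAG"
    # Standard genetic code packed in TCAG x TCAG x TCAG codon order; '*' = stop.
    AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    CODE = {a + b + c: AA[16 * i + 4 * j + k]
            for i, a in enumerate(BASES)
            for j, b in enumerate(BASES)
            for k, c in enumerate(BASES)}

    # Kyte-Doolittle hydrophobicity values per amino acid.
    KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
          "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
          "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
          "R": -4.5}

    def ms_robustness(code):
        """Mean squared hydrophobicity change over all single-nucleotide
        substitutions between two sense codons; lower means more robust."""
        total, count = 0.0, 0
        for codon, aa in code.items():
            if aa == "*":
                continue
            for pos in range(3):
                for b in BASES:
                    if b == codon[pos]:
                        continue
                    mut = code[codon[:pos] + b + codon[pos + 1:]]
                    if mut == "*":
                        continue
                    total += (KD[aa] - KD[mut]) ** 2
                    count += 1
        return total / count
    ```

    Synonymous substitutions contribute zero to the sum, so a code that places similar amino acids on mutationally adjacent codons scores low; comparing this value against randomly scrambled codon assignments is the usual way the code's optimality is assessed.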

  14. Discrete processes modelling and geometry description in RTS and T code

    International Nuclear Information System (INIS)

    Degtyarev, I.I.; Liashenko, O.A.; Lokhovitskii, A.E.; Yazynin, I.A.; Belyakov-Bodin, V.I.; Blokhin, A.I.

    2001-01-01

    This paper describes the recent version of the RTS and T code system. RTS and T performs detailed simulations of many types of particles transport in complex 3D geometries in the energy range from a part of eV up to 20 TeV. A description of the main processes is given. (author)

  15. A Processing Approach to the Dual Coding Hypothesis

    Science.gov (United States)

    Kosslyn, Stephen M.; And Others

    1976-01-01

    Investigates whether imagery and verbal encoding use different processing mechanisms and attempts to discover whether the processes underlying the use of imagery to retain words are also involved in like-modality perception. (Author/RK)

  16. HCPCS Coding: An Integral Part of Your Reimbursement Strategy.

    Science.gov (United States)

    Nusgart, Marcia

    2013-12-01

    The first step to a successful reimbursement strategy is to ensure that your wound care product has the most appropriate Healthcare Common Procedure Coding System (HCPCS) code (or billing) for your product. The correct HCPCS code plays an essential role in patient access to new and existing technologies. When devising a strategy to obtain a HCPCS code for its product, companies must consider a number of factors as follows: (1) Has the product gone through the Food and Drug Administration (FDA) regulatory process or does it need to do so? Will the FDA code designation impact which HCPCS code will be assigned to your product? (2) In what "site of service" do you intend to market your product? Where will your customers use the product? Which coding system (CPT ® or HCPCS) applies to your product? (3) Does a HCPCS code for a similar product already exist? Does your product fit under the existing HCPCS code? (4) Does your product need a new HCPCS code? What is the linkage, if any, between coding, payment, and coverage for the product? Researchers and companies need to start early and place the same emphasis on a reimbursement strategy as it does on a regulatory strategy. Your reimbursement strategy staff should be involved early in the process, preferably during product research and development and clinical trial discussions.

  17. Performance analysis of spectral-phase-encoded optical code-division multiple-access system regarding the incorrectly decoded signal as a nonstationary random process

    Science.gov (United States)

    Yan, Meng; Yao, Minyu; Zhang, Hongming

    2005-11-01

    The performance of a spectral-phase-encoded (SPE) optical code-division multiple-access (OCDMA) system is analyzed. Regarding the incorrectly decoded signal (IDS) as a nonstationary random process, we derive a novel probability distribution for it. The probability distribution of the IDS is considered a chi-squared distribution with degrees of freedom r=1, which is more reasonable and accurate than in previous work. The bit error rate (BER) of an SPE OCDMA system under multiple-access interference is evaluated. Numerical results show that the system can sustain very low BER even when there are multiple simultaneous users, and as the code length becomes longer or the initial pulse becomes shorter, the system performs better.

  18. On transform coding tools under development for VP10

    Science.gov (United States)

    Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao

    2016-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of the available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.

  19. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    Science.gov (United States)

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela

    2015-01-01

    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…

  20. Validation of the VTT's reactor physics code system

    International Nuclear Information System (INIS)

    Tanskanen, A.

    1998-01-01

    At VTT Energy several international reactor physics codes and nuclear data libraries are used in a variety of applications. The codes and libraries are under constant development, and every now and then new updated versions are released, which are taken into use as soon as they have been validated at VTT Energy. The primary aim of the validation is to ensure that a code works properly and that it can be used correctly. Moreover, the applicability of the codes and libraries is studied in order to establish their advantages and weak points. The capability of generating program-specific nuclear data for different reactor physics codes starting from the same evaluated data is sometimes of great benefit. VTT Energy has acquired a nuclear data processing system based on the NJOY-94.105 and TRANSX-2.15 processing codes. The validity of the processing system has been demonstrated by generating pointwise (MCNP) and groupwise (ANISN) temperature-dependent cross section sets for the benchmark calculations of the Doppler coefficient of reactivity. At VTT Energy the KENO-VI three-dimensional Monte Carlo code is used in criticality safety analyses. The KENO-VI code and the 44GROUPNDF5 data library have been validated at VTT Energy against the ZR-6 and LR-0 critical experiments. Burnup credit refers to the reduction in reactivity of burned nuclear fuel due to the change in composition during irradiation. VTT Energy has participated in the calculational VVER-440 burnup credit benchmark in order to validate criticality safety calculation tools. (orig.)

  1. The GC computer code for flow sheet simulation of pyrochemical processing of spent nuclear fuels

    International Nuclear Information System (INIS)

    Ahluwalia, R.K.; Geyer, H.K.

    1996-01-01

    The GC computer code has been developed for flow sheet simulation of pyrochemical processing of spent nuclear fuel. It utilizes a robust algorithm SLG for analyzing simultaneous chemical reactions between species distributed across many phases. Models have been developed for analysis of the oxide fuel reduction process, salt recovery by electrochemical decomposition of lithium oxide, uranium separation from the reduced fuel by electrorefining, and extraction of fission products into liquid cadmium. The versatility of GC is demonstrated by applying the code to a flow sheet of current interest

  2. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling. Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
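    The trade-off studied in this abstract, constellation size versus required SNR, can be illustrated with the textbook approximation for the bit error rate of uncoded, Gray-coded M-PSK over an AWGN channel (valid for M >= 4; this is a standard formula, not the paper's source-channel model): Pb ~ (2 / log2 M) * Q(sqrt(2 * log2(M) * Eb/N0) * sin(pi / M)).

    ```python
    import math

    def qfunc(x):
        """Gaussian tail probability Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def mpsk_ber(ebn0_db, M):
        """Approximate bit error rate of Gray-coded, uncoded M-PSK over AWGN
        (M >= 4); ebn0_db is Eb/N0 in decibels."""
        k = math.log2(M)
        ebn0 = 10.0 ** (ebn0_db / 10.0)
        return (2.0 / k) * qfunc(math.sqrt(2.0 * k * ebn0) * math.sin(math.pi / M))
    ```

    At a fixed Eb/N0, growing the constellation from QPSK to 8-PSK raises the BER, which is why the paper's adaptive TCM scheme trades constellation size against channel coding to meet the power and bandwidth constraints.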

  3. Code of Practice on Radiation Protection in the Mining and Processing of Mineral Sands (1982) (Western Australia)

    International Nuclear Information System (INIS)

    1982-01-01

    This Code establishes radiation safety practices for the mineral sands industry in Western Australia. The Code prescribes, not only for operators and managers of mines and processing plants but for their employees as well, certain duties designed to ensure that radiation exposure is kept as low as reasonably practicable. The Code further provides for the management of wastes, again with a view to keeping contaminant concentrations and dose rates within specified levels. Finally, provision is made for the rehabilitation of those sites in which mining or processing operations have ceased by restoring the areas to designated average radiation levels. (NEA) [fr

  4. Classification and coding of commercial fishing injuries by work processes: an experience in the Danish fresh market fishing industry

    DEFF Research Database (Denmark)

    Jensen, Olaf Chresten; Stage, Søren; Noer, Preben

    2005-01-01

    BACKGROUND: Work-related injuries in commercial fishing are of concern internationally. To better identify the causes of injury, this study coded occupational injuries by working processes in commercial fishing for fresh market fish. METHODS: A classification system of the work processes was developed... The work processes related to working with the gear and nets vary greatly in the different fishing methods. Coding of the injuries to the specific working processes allows for targeted prevention efforts.

  5. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    Science.gov (United States)

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economic success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301), to find operative strategies to improve efficiency, and to identify strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16% of cases. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. A workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  6. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    Science.gov (United States)

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. Coding validity of conditions is closely related to their clinical importance and the complexity of patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.

  7. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation that drives the development, as well as the dissemination, of pioneering communication systems with ever-increasing fidelity and resolution. Research into image processing techniques has been motivated by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques that can be used at the transmitter end in order to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare various contemporary image pre-processing frameworks, namely Compressed Sensing, Singular Value Decomposition and the Integer Wavelet Transform, on performance. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
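    Of the frameworks compared in this abstract, SVD-based pre-processing is the most compact to illustrate. A rank-k truncation keeps only the k largest singular values of the image matrix; by the Eckart-Young theorem this is the best rank-k approximation in the Frobenius norm. The sketch below uses a random matrix standing in for a grayscale image, not the authors' test data.

    ```python
    import numpy as np

    def svd_compress(img, k):
        """Best rank-k approximation of a 2D array (grayscale image):
        keep the k largest singular values and their singular vectors."""
        U, s, Vt = np.linalg.svd(img, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k, :]
    ```

    Storage drops from m*n values to k*(m + n + 1), and the reconstruction error equals the energy in the discarded singular values, so quality degrades gracefully as k shrinks.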

  8. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than unique decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows us to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
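    For context, unique decipherability itself, the property that the coding-partition notion relaxes, can be tested with the classical Sardinas-Patterson procedure. The sketch below is illustrative background, not the paper's canonical-partition algorithm:

    ```python
    # Sardinas-Patterson test: a code is UD iff no dangling suffix is a codeword.

    def is_uniquely_decipherable(code):
        code = set(code)

        def suffixes(a_set, b_set):
            """Remainders b[len(a):] for every a in a_set that properly prefixes b."""
            return {b[len(a):] for a in a_set for b in b_set
                    if b.startswith(a) and b != a}

        dangling = suffixes(code, code)
        seen = set()
        while dangling:
            if dangling & code:      # a dangling suffix is itself a codeword
                return False
            seen |= dangling
            dangling = (suffixes(dangling, code) | suffixes(code, dangling)) - seen
        return True

    print(is_uniquely_decipherable({"0", "01", "11"}))  # True
    print(is_uniquely_decipherable({"0", "01", "10"}))  # False ("010" is ambiguous)
    ```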

  9. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  10. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power- and bandwidth-constrained noisy channel. The framework is applied to progressive image transmission using constant-envelope M-ary phase shift keying (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis-coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
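    The constant-envelope M-PSK mapping the framework builds on can be sketched as follows. Gray labeling is assumed here because it is the common choice; the abstract does not specify the paper's exact bit-to-symbol mapping:

    ```python
    # Map k bits to a unit-energy M-PSK symbol, M = 2**k (Gray labeling assumed).
    import cmath

    def mpsk_symbol(bits, k):
        index = int("".join(map(str, bits)), 2)
        gray = index ^ (index >> 1)   # Gray code: adjacent symbols differ in 1 bit
        M = 2 ** k
        return cmath.exp(2j * cmath.pi * gray / M)

    # QPSK (k = 2): four symbols, all on the unit circle (constant envelope).
    symbols = [mpsk_symbol([b0, b1], 2) for b0 in (0, 1) for b1 in (0, 1)]
    print([round(abs(s), 6) for s in symbols])  # [1.0, 1.0, 1.0, 1.0]
    ```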

  11. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
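    The syndrome-style use of a linear block code for coding X given side information Y can be illustrated with a toy one-error-correcting Hamming(7,4) code standing in for the paper's BCH codes. This is a simplification: the rate-adaptation and feedback machinery is omitted, and only the basic "transmit the syndrome, decode with side information" step is shown:

    ```python
    # Toy Slepian-Wolf style coding: send the 3-bit syndrome of X; the decoder
    # recovers X from the syndrome plus correlated side information Y, assuming
    # X and Y differ in at most one bit.
    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])  # Hamming(7,4) parity-check matrix

    def syndrome(x):
        return H @ x % 2

    def decode(syn_x, y):
        """Find x closest to y whose syndrome is syn_x (<= 1 differing bit)."""
        diff = (syndrome(y) - syn_x) % 2        # syndrome of the error x ^ y
        if not diff.any():
            return y.copy()
        # Each single-bit error syndrome is a distinct column of H.
        pos = next(j for j in range(7) if np.array_equal(H[:, j], diff))
        x = y.copy()
        x[pos] ^= 1
        return x

    x = np.array([1, 0, 1, 1, 0, 0, 1])
    y = x.copy(); y[4] ^= 1                     # side info differs in one bit
    print(np.array_equal(decode(syndrome(x), y), x))  # True
    ```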

  12. User's manual for the Oak Ridge Tokamak Transport Code

    International Nuclear Information System (INIS)

    Munro, J.K.; Hogan, J.T.; Howe, H.C.; Arnurius, D.E.

    1977-02-01

    A one-dimensional tokamak transport code is described which simulates a plasma discharge using a fluid model which includes power balances for electrons and ions, conservation of mass, and Maxwell's equations. The modular structure of the code allows a user to add models of various physical processes which can modify the discharge behavior. Such physical processes treated in the version of the code described here include effects of plasma transport, neutral gas transport, impurity diffusion, and neutral beam injection. Each process can be modeled by a parameterized analytic formula or at least one detailed numerical calculation. The program logic of each module is presented, followed by detailed descriptions of each subroutine used by the module. The physics underlying the models is only briefly summarized. The transport code was written in IBM FORTRAN-IV and implemented on IBM 360/370 series computers at the Oak Ridge National Laboratory and on the CDC 7600 computers of the Magnetic Fusion Energy (MFE) Computing Center of the Lawrence Livermore Laboratory. A listing of the current reference version is provided on accompanying microfiche

  13. Development of hydraulic analysis code for optimizing thermo-chemical IS process reactors

    International Nuclear Information System (INIS)

    Terada, Atsuhiko; Hino, Ryutaro; Hirayama, Toshio; Nakajima, Norihiro; Sugiyama, Hitoshi

    2007-01-01

    The Japan Atomic Energy Agency has been conducting a study on the thermochemical IS process for water-splitting hydrogen production. Based on the test results and know-how obtained through the bench-scale test, a pilot test plant with a hydrogen production performance of 30 Nm3/h is being designed conceptually as the next step of the IS process development. In the design of the IS pilot plant, it is important to make the chemical reactors compact with high performance from the viewpoint of plant cost reduction. A new hydraulic analysis code has been developed for optimizing the mixing performance of multi-phase flow involving chemical reactions, especially in the Bunsen reactor, which is characterized by a complex flow pattern with gas-liquid chemical interaction involving flow instability. Preliminary analytical results obtained with the above-mentioned code, especially flow patterns induced by swirling flow, agreed well with those measured in water experiments, which showed a vortex breakdown pattern in a simplified Bunsen reactor. (author)

  14. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper will discuss five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code will be presented, a sample application given, followed by a discussion of the present status and future development plans.

  15. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    Science.gov (United States)

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  16. ANDREA: Advanced nodal diffusion code for reactor analysis

    International Nuclear Information System (INIS)

    Belac, J.; Josek, R.; Klecka, L.; Stary, V.; Vocka, R.

    2005-01-01

    A new macro code is being developed at NRI which will allow coupling of an advanced thermal-hydraulics model with neutronics calculations, as well as efficient use in the core loading pattern optimization process. This paper describes the current stage of the macro code development. The core simulator is based on the nodal expansion method; the Helios lattice code is used for few-group library preparation. Standard features such as pin-wise power reconstruction and feedback iterations on critical control rod position, boron concentration and reactor power are implemented. Special attention is paid to system and code modularity in order to enable flexible and easy implementation of new features in the future. The precision of the methods used in the macro code has been verified on available benchmarks. Testing against Temelin PWR operational data is under way. (Authors)

  17. Remote-Handled Transuranic Content Codes

    International Nuclear Information System (INIS)

    2001-01-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document represents the development of a uniform content code system for RH-TRU waste to be transported in the 72-B cask. It will be used to convert existing waste form numbers, content codes, and site-specific identification codes into a system that is uniform across the U.S. Department of Energy (DOE) sites. The existing waste codes at the sites can be grouped under uniform content codes without any loss of waste characterization information. The RH-TRUCON document provides an all-encompassing description for each content code and compiles this information for all DOE sites. Compliance with waste generation, processing, and certification procedures at the sites (outlined in this document for each content code) ensures that prohibited waste forms are not present in the waste. The content code gives an overall description of the RH-TRU waste material in terms of processes and packaging, as well as the generation location. This helps to provide cradle-to-grave traceability of the waste material so that the various actions required to assess its qualification as payload for the 72-B cask can be performed. The content codes also impose restrictions and requirements on the manner in which a payload can be assembled. The RH-TRU Waste Authorized Methods for Payload Control (RH-TRAMPAC), Appendix 1.3.7 of the 72-B Cask Safety Analysis Report (SAR), describes the current governing procedures applicable for the qualification of waste as payload for the 72-B cask. The logic for this classification is presented in the 72-B Cask SAR. Together, these documents (RH-TRUCON, RH-TRAMPAC, and relevant sections of the 72-B Cask SAR) present the foundation and justification for classifying RH-TRU waste into content codes. Only content codes described in this document can be considered for transport in the 72-B cask. Revisions to this document will be made as additional waste qualifies for transport.
Each content code uniquely

  18. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    Science.gov (United States)

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  19. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Stefan Bosse

    2015-02-01

    Full Text Available Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
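    As an aside, the zero-operand (stack-based) instruction format mentioned above can be sketched as a tiny interpreter: opcodes carry no operand fields, so each instruction is small, which is what keeps the agent program code compact. The instruction set below is invented for illustration and is not the platform's actual ISA:

    ```python
    # Minimal zero-operand stack machine (sketch; hypothetical instruction set).

    def run(program, stack=None):
        """Interpret a tiny zero-operand stack program; literals push themselves."""
        stack = stack if stack is not None else []
        ops = {
            "add": lambda s: s.append(s.pop() + s.pop()),
            "mul": lambda s: s.append(s.pop() * s.pop()),
            "dup": lambda s: s.append(s[-1]),
        }
        for ins in program:
            if ins in ops:
                ops[ins](stack)       # opcode: operates only on the stack
            else:
                stack.append(ins)     # operand embedded in the code stream
        return stack

    # (2 + 3) squared, written with no explicit operand fields in the opcodes:
    print(run([2, 3, "add", "dup", "mul"]))  # [25]
    ```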

  20. PCS a code system for generating production cross section libraries

    International Nuclear Information System (INIS)

    Cox, L.J.

    1997-01-01

    This document outlines the use of the PCS Code System. It summarizes the execution process for generating FORMAT2000 production cross section files from FORMAT2000 reaction cross section files. It also describes the process of assembling the ASCII versions of the high energy production files made from ENDL and Mark Chadwick's calculations. Descriptions of the function of each code along with its input and output and use are given. This document is under construction. Please submit entries, suggestions, questions, and corrections to (ljc at sign llnl.gov) 3 tabs

  1. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  2. RELAP-7 Code Assessment Plan and Requirement Traceability Matrix

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.

    2016-10-01

    RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods and physical models over the last decades. Recently, INL has also been making an effort to establish a code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international/domestic reports and research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and identify the code development process and its present capability.

  3. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself from among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into another data processing program was possible. This program has the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology.
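    The two-stage lookup the abstract describes, an organ code chosen first and then a pathology file selected by the organ code's first digit, can be sketched as follows. The dictionary entries are hypothetical, chosen only to reproduce the abstract's example code '131.3661'; real ACR tables differ:

    ```python
    # Hypothetical dictionary entries; real ACR coding tables are much larger.
    ORGAN_CODES = {"chest, lung": "131."}
    PATHOLOGY_FILES = {"1": {"pneumonia": "3661"}}   # keyed by 1st organ digit

    def acr_code(organ_name, pathology_name):
        organ = ORGAN_CODES[organ_name]
        pathology_file = PATHOLOGY_FILES[organ[0]]   # file chosen by 1st digit
        return organ + pathology_file[pathology_name]

    print(acr_code("chest, lung", "pneumonia"))  # 131.3661
    ```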

  4. Designing the User Interface COBRET under Windows to Carry out Pre- and Post-Processing for the Programs COBRA-RERTR and PARET

    International Nuclear Information System (INIS)

    Ghazi, N.; Monther, A.; Hainoun, A.

    2004-01-01

    In the framework of testing, evaluation and application of computer codes in the design and safety analysis of research reactors, the dynamic code PARET and the thermal-hydraulic code COBRA-RERTR have been adopted. In order to run the codes under Windows and to support the user with pre- and post-processing, the user interface program COBRET has been developed in the programming language Visual Basic 6; the data used by it are organized and stored in a relational database in MS Access, an integral part of the software package MS Office. The interface works in the environment of the Windows operating system and utilizes its graphics as well as other capabilities. It consists of a pre-processor and a post-processor. The pre-processor deals with the interactive preparation of the input files for the PARET and COBRA codes. It supports the user with an automatic checking routine for detecting logical input errors, in addition to many direct helps during the multi-mode input process. This process includes automatic branching according to the selected control parameters, which depend on the simulation modes of the considered physical problem. The post-processor supports the user with graphical tools to present the time and axial distributions of the system variables, which consist of many neutronic and thermal-hydraulic parameters of the reactor system, like neutron flux, reactivity, temperatures, flow rate, pressure and void distribution. (authors)

  5. Designing the user interface COBRET under windows to carry out pre- and post-processing for the programs COBRA-RERTR and PARET

    International Nuclear Information System (INIS)

    Hainoun, A.; Monther, A.; Ghazi, N.

    2004-01-01

    In the framework of testing, evaluation and application of computer codes in the design studies and safety analysis of research reactors, the dynamic code PARET and the thermal-hydraulic code COBRA-RERTR have been adopted. In order to run the codes under Windows and to support the user with pre- and post-processing, the user interface program COBRET has been developed in the programming language Visual Basic 6; the data used by it are organized and stored in a relational database in MS Access, an integral part of the software package MS Office. The interface works in the environment of the Windows operating system and utilizes its graphics as well as other capabilities. It consists of a pre-processor and a post-processor. The pre-processor deals with the interactive preparation of the input files for the PARET and COBRA codes. It supports the user with an automatic checking routine for detecting logical input errors, in addition to many direct helps during the multi-mode input process. This process includes automatic branching according to the selected control parameters, which depend on the simulation modes of the considered physical problem. The post-processor supports the user with graphical tools to present the time and axial distributions of the system variables, which consist of many neutronic and thermal-hydraulic parameters of the reactor system, like neutron flux, reactivity, temperatures, flow rate, pressure and void distribution (author)

  6. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...
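    The vector-coding operation described above, multiplying incoming length-L packets by L x L coding matrices and summing the results, can be sketched over GF(2). This is a simplification for illustration; the paper works over general finite fields:

    ```python
    # Intermediate-node combining step in vector network coding (GF(2) sketch).
    import numpy as np

    def combine(packets, matrices):
        """Output packet = sum of M_i @ p_i over GF(2)."""
        out = np.zeros(len(packets[0]), dtype=int)
        for M, p in zip(matrices, packets):
            out = (out + M @ p) % 2
        return out

    L = 4
    p1 = np.array([1, 0, 1, 1])
    p2 = np.array([0, 1, 1, 0])
    M1 = np.eye(L, dtype=int)                      # identity: pass p1 through
    M2 = np.roll(np.eye(L, dtype=int), 1, axis=1)  # cyclic shift of p2
    print(combine([p1, p2], [M1, M2]))  # [0 1 1 1]
    ```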

  7. LACAN Code for global simulation of SILVA laser isotope separation process

    International Nuclear Information System (INIS)

    Quaegebeur, J.P.; Goldstein, S.

    1991-01-01

    The functions used for the definition of a SILVA separator require quite a lot of dimensional and operating parameters. Sizing a laser isotope separation plant requires the determination of these parameters for optimization. In the LACAN simulation code, each elementary physical process is described by a simplified model. An example is given for a uranium isotope separation plant whose separation power is optimized with 6 parameters [fr

  8. Enabling Ethical Code Embeddedness in Construction Organizations: A Review of Process Assessment Approach.

    Science.gov (United States)

    Oladinrin, Olugbenga Timo; Ho, Christabel Man-Fong

    2016-08-01

    Several researchers have identified codes of ethics (CoEs) as tools that stimulate positive ethical behavior by shaping the organisational decision-making process, but few have considered the information needed for code implementation. Beyond being a legal and moral responsibility, ethical behavior needs to become an organisational priority, which requires an alignment process that integrates employee behavior with the organisation's ethical standards. This paper discusses processes for the responsible implementation of CoEs based on an extensive review of the literature. The internationally recognized European Foundation for Quality Management Excellence Model (EFQM model) is proposed as a suitable framework for assessing an organisation's ethical performance, including CoE embeddedness. The findings presented herein have both practical and research implications. They will encourage construction practitioners to shift their attention from ethical policies to possible enablers of CoE implementation and serve as a foundation for further research on ethical performance evaluation using the EFQM model. This is the first paper to discuss the model's use in the context of ethics in construction practice.

  9. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  10. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  11. Development of a two-dimensional simulation code (KUAD) including atomic processes for beam direct energy conversion

    International Nuclear Information System (INIS)

    Yamamoto, Y.; Yoshikawa, K.; Hattori, Y.

    1987-01-01

    A two-dimensional simulation code for the beam direct energy conversion, called KUAD (Kyoto University Advanced DART), including various loss mechanisms has been developed, and has shown excellent agreement with the authors' experiments using He+ beams. The beam direct energy converter (BDC) is a device to recover the kinetic energy of unneutralized ions in the neutral beam injection (NBI) system directly as electricity. The BDC is very important and essential not only for the improvement of NBI system efficiency, but also for the relaxation of high-heat-flux problems on the beam dump as injection energies increase. So far, no simulation code has successfully predicted BDC experimental results. The KUAD code applies an algorithm optimized for vector processing, the finite element method (FEM) for the potential calculation, and a semi-automatic method for spatial segmentation. Since particle trajectories in the KUAD code are solved analytically, very high-speed tracing of the particles is achieved by introducing an adjacent-element matrix to identify the neighboring triangular elements and electrodes. Ion space charges are also calculated analytically by the cloud-in-cell (CIC) method, as are electron space charges. Power losses due to atomic processes can also be evaluated in the KUAD code

  12. Code it rite the first time : automated invoice processing solution designed to ensure validity to field ticket coding

    Energy Technology Data Exchange (ETDEWEB)

    Chandler, G.

    2010-03-15

    An entrepreneur who ran 55 rigs for a major oilfield operator in Calgary has developed a solution for the oil industry that reduces field ticketing errors from 40 per cent to almost none. The Code-Rite not only simplifies field ticketing but can eliminate weeks of trying to balance authorization for expenditure (AFE) numbers. A service provider who wants a field ticket signed for billing purposes following a service call to a well site receives all pertinent information on a barcode that includes AFE number, location, routing, approval authority and mailing address. Attaching the label to the field ticket provides all the invoicing information needed. This article described the job profile, education and life experiences and opportunities that led the innovator to develop this technology that solves an industry-wide problem. Code-Rite is currently being used by 3 large upstream oil and gas operators and plans are underway to automate the entire invoice processing system. 1 fig.

  13. Implementation of decommissioning materials conditional clearance process to the OMEGA calculation code

    International Nuclear Information System (INIS)

    Zachar, Matej; Necas, Vladimir; Daniska, Vladimir

    2011-01-01

    The activities performed during the nuclear installation decommissioning process inevitably lead to the production of a large amount of radioactive material to be managed. A significant part of the materials has such a low radioactivity level that they can be released to the environment without any restriction on further use. On the other hand, for materials with radioactivity slightly above the defined unconditional clearance level, there is a possibility to release them conditionally for a specific purpose, in accordance with a developed scenario assuring that radiation exposure limits for the population are not exceeded. This procedure for managing decommissioning materials could lead to the recycling and reuse of more solid materials and save radioactive waste repository volume. In the paper, the implementation of the conditional release process in the OMEGA code, which is used for the calculation of decommissioning parameters, is analyzed in detail. The analytical approach to the material parameter assessment first assumes a definition of radiological limit conditions, based on the evaluation of possible scenarios for conditionally released materials, and their application to the appropriate sorter type in the existing material and radioactivity flow system. Other calculation procedures with relevant technological or economic parameters, mathematically describing e.g. final radiation monitoring or transport outside the locality, are applied in the OMEGA code in the next step. Together with the limits, the new procedures, which create an independent material stream, allow evaluation of the conditional material release process during decommissioning. Model calculations evaluating various scenarios with different input parameters and considering conditional release of materials to the environment are performed to verify the implemented methodology.
Output parameters and results of the model assessment are presented and discussed in the final part of the paper

  14. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: ► We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. ► We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. ► We find and classify all 2D homological stabilizer codes. ► We find optimal codes among the homological stabilizer codes.
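As a small illustrative sketch (not taken from the paper), the following Python builds the star and plaquette stabilizers of Kitaev's toric code on an L x L periodic square lattice, the prototypical 4-valent graph, and checks the defining commutation property: every X-type star overlaps every Z-type plaquette on an even number of edges. The edge-labelling convention is an assumption made here for concreteness.

```python
from itertools import product

def toric_code_stabilizers(L):
    """Star (X-type) and plaquette (Z-type) stabilizers of Kitaev's
    toric code on an L x L square lattice with periodic boundaries.
    Edges are labelled (x, y, d) with d = 0 (horizontal) or 1 (vertical);
    each operator is represented by the set of four edges it acts on."""
    stars, plaquettes = [], []
    for x, y in product(range(L), repeat=2):
        # star at vertex (x, y): the four edges incident on that vertex
        stars.append(frozenset({
            (x, y, 0), ((x - 1) % L, y, 0),
            (x, y, 1), (x, (y - 1) % L, 1),
        }))
        # plaquette with lower-left corner (x, y): its four boundary edges
        plaquettes.append(frozenset({
            (x, y, 0), (x, (y + 1) % L, 0),
            (x, y, 1), ((x + 1) % L, y, 1),
        }))
    return stars, plaquettes

def commute(x_op, z_op):
    # An X-type and a Z-type Pauli operator commute iff their supports
    # overlap on an even number of qubits (edges).
    return len(x_op & z_op) % 2 == 0
```

Any star and plaquette share either zero or exactly two edges, which is why all stabilizers commute.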

  15. From chemical metabolism to life: the origin of the genetic coding process

    Directory of Open Access Journals (Sweden)

    Antoine Danchin

    2017-06-01

Looking for origins is so deeply rooted in ideology that most studies reflect opinions that fail to explore the first realistic scenarios. To be sure, trying to understand the origins of life should be based on what we know of current chemistry in the solar system and beyond. There, amino acids and very small compounds such as carbon dioxide, dihydrogen or dinitrogen and their immediate derivatives are ubiquitous. Surface-based chemical metabolism using these basic chemicals is the most likely beginning, in which amino acids, coenzymes and phosphate-based small carbon molecules were built up. Nucleotides, and of course RNAs, must have come into being much later. As a consequence, the key question in accounting for life is to understand how a chemical metabolism that began with amino acids progressively shaped itself into a coding process involving RNAs. Here I explore the role of building up complementarity rules as the first information-based process that allowed the genetic code to emerge, after RNAs replaced surfaces as the carriers of the basic metabolic pathways that drive the pursuit of life.

  16. Globalization of ASME Nuclear Codes and Standards

    International Nuclear Information System (INIS)

    Swayne, Rick; Erler, Bryan A.

    2006-01-01

With the globalization of the nuclear industry, reactor suppliers are based in many countries around the world (such as the United States, France, Japan, Canada, South Korea and South Africa), they will be marketing their reactors to many countries (such as the US, China, South Korea, France, Canada, Finland and Taiwan), and they will be fabricating their components in many different countries. In this situation, the requirements of ASME Nuclear Codes and Standards need to be adjusted to accommodate the regulations, fabrication processes and technology of the various countries involved. It is also very important for the American Society of Mechanical Engineers (ASME) to be able to assure that products meeting the applicable ASME Code requirements provide the same level of safety and quality assurance as products currently fabricated under the ASME accreditation process. To this end, many countries are in the process of establishing or changing their regulations, and it is important for ASME to interface with the appropriate organizations in those countries in order to ensure the effective use of ASME Codes and Standards around the world. (authors)

  17. Calculation code PULCO for Purex process in pulsed column

    International Nuclear Information System (INIS)

    Gonda, Kozo; Matsuda, Teruo

    1982-03-01

The calculation code PULCO, which can simulate the Purex process using a pulsed column as an extractor, has been developed. PULCO is based on the fundamental concept that mass transfer within a pulsed column occurs through the interface between liquid drops and the continuous-phase fluid, and, unlike conventional codes, it explicitly represents the phenomena actually occurring in a pulsed column, such as the generation of liquid drops, their rise and fall, and their coalescence. PULCO incorporates actually measured values of the fundamental quantities representing the extraction behavior of liquid drops in a pulsed column, such as the mass transfer coefficient of each component, the diameter and velocity of the liquid drops, the holdup of the dispersed phase, and the axial turbulent diffusion coefficient. The calculated results were verified by installing a pulsed column of 50 mm inside diameter and 2 m length with 40 plate stages in a glove box for an unirradiated uranium-plutonium mixed system. The calculated and measured results were in good agreement, and the validity of PULCO was confirmed. (Kako, I.)
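PULCO's drop-based model is not reproduced here, but the stage-wise counter-current extraction it simulates can be illustrated with the textbook Kremser relation for a cascade with a linear distribution equilibrium. The function and parameter names are illustrative assumptions, not PULCO's.

```python
def raffinate_fraction(n_stages, extraction_factor):
    """Kremser relation for a counter-current extraction cascade with a
    linear equilibrium y* = m*x: the fraction of solute remaining in the
    raffinate after n ideal stages.  extraction_factor = m * S / F
    (distribution coefficient times the solvent-to-feed flow ratio)."""
    E, n = extraction_factor, n_stages
    if abs(E - 1.0) < 1e-12:
        return 1.0 / (n + 1)          # limiting case E = 1
    return (E - 1.0) / (E ** (n + 1) - 1.0)
```

For a single stage with E = 2 this reduces to the simple mass balance x_out = x_in / (1 + E) = 1/3, and adding stages monotonically improves the extraction.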

  18. Thermal hydraulic codes for LWR safety analysis - present status and future perspective

    Energy Technology Data Exchange (ETDEWEB)

    Staedtke, H. [Commission of the European Union, Ispra (Italy)

    1997-07-01

    The aim of the present paper is to give a review on the current status and future perspective of present best-estimate Thermal Hydraulic codes. Reference is made to internationally well-established codes which have reached a certain state of maturity. The first part of the paper deals with the common basic code features with respect to the physical modelling and their numerical methods used to describe complex two-phase flow and heat transfer processes. The general predictive capabilities are summarized identifying some remaining code deficiencies and their underlying limitations. The second part discusses various areas including physical modelling, numerical techniques and informatic structure where the codes could be substantially improved.

  19. Thermal hydraulic codes for LWR safety analysis - present status and future perspective

    International Nuclear Information System (INIS)

    Staedtke, H.

    1997-01-01

    The aim of the present paper is to give a review on the current status and future perspective of present best-estimate Thermal Hydraulic codes. Reference is made to internationally well-established codes which have reached a certain state of maturity. The first part of the paper deals with the common basic code features with respect to the physical modelling and their numerical methods used to describe complex two-phase flow and heat transfer processes. The general predictive capabilities are summarized identifying some remaining code deficiencies and their underlying limitations. The second part discusses various areas including physical modelling, numerical techniques and informatic structure where the codes could be substantially improved

  20. Contractual Penalty and the Right to Payment for Delays Caused by Force Majeure in Czech Civil Law under the New Civil Code

    Directory of Open Access Journals (Sweden)

    Janku Martin

    2015-12-01

In contracts concluded between entrepreneurs under the Czech Civil Code, it is a relatively common arrangement that the parties disclaim any and all liability for damage arising from non-compliance with contractual obligations if they can prove that the failure was due to an obstacle independent of their will. This circumstance excluding liability for damage is called force majeure in legal theory. In many countries it is regulated directly by legislation (höhere Gewalt, vis major). The Czech regulation, represented by the new Civil Code of 2012 (CivC), however, contains only a framework provision mentioning discharging reasons. The paper deals with the rather disputed issue that force majeure does not affect the obligation to pay a contractual penalty under the new rules of the CivC, which should therefore be reflected in the arrangements for contractual penalties inter partes. To this end, the paper analyses the concepts of contractual penalty and force majeure in civil law legislation, then compares their mutual relationship and impact on the obligations of the contracting parties, and finally draws recommendations for practice from the perspective of the contracting process.

  1. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  2. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal over a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
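The digital/analog trade-off described above can be illustrated with a toy Monte Carlo comparison (an illustration only, not the paper's HDA optimization): a purely analog linear scheme degrades gracefully as the CSNR varies, while a digital scheme designed for a fixed outage CSNR fails completely below its threshold. The distortion formulas assume a unit-variance memoryless Gaussian source and treat the Rayleigh-faded channel power as exponentially distributed.

```python
import random

def ammse_analog(mean_csnr, n=200_000, seed=1):
    """Average MMSE of uncoded linear (analog) transmission of a
    unit-variance Gaussian source over a Rayleigh fading channel:
    E[1 / (1 + g)] with g exponentially distributed (mean = mean_csnr)."""
    rng = random.Random(seed)
    return sum(1.0 / (1.0 + rng.expovariate(1.0 / mean_csnr))
               for _ in range(n)) / n

def ammse_digital(mean_csnr, outage_csnr, n=200_000, seed=1):
    """Digital scheme designed for a fixed outage CSNR: below the
    threshold the decoder fails (distortion = source variance 1);
    above it the distortion is the Gaussian rate-distortion value
    1 / (1 + outage_csnr) achievable by a capacity-achieving code."""
    rng = random.Random(seed)
    d_ok = 1.0 / (1.0 + outage_csnr)
    total = 0.0
    for _ in range(n):
        g = rng.expovariate(1.0 / mean_csnr)
        total += 1.0 if g < outage_csnr else d_ok
    return total / n
```

With the outage CSNR set equal to the mean CSNR, the frequent outages make the digital scheme's average distortion worse than the analog one's, which is the motivation for the hybrid approach and the joint optimization of outage CSNR and power allocation.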

  3. Restructuring of the driver for TASS/SMR, SMART analysis code

    International Nuclear Information System (INIS)

    Lee, G. H.; Kim, S. H.; Whang, Y. D.; Kim, H. C.

    2003-01-01

TASS/SMR is under development, implementing newly developed thermal hydraulic models for the integral reactor SMART. The data structure of TASS relies on external files, which complicates code maintenance and hinders understanding of the code. The driver for TASS/SMR was rewritten to use Fortran 90 advanced features such as dynamic memory management and user-defined derived types. The input structure of TASS was changed from an interactive method to a numbered-card system, and the output process was reformed by removing the utility program. The revised TASS/SMR driver was validated by comparison of results. The use of the revised driver is expected to accelerate the code development process as well as improve user friendliness.

  4. Establishment of Technical Collaboration basis between Korea and France for the development of severe accident assessment computer code under high burnup condition

    International Nuclear Information System (INIS)

    Kim, H. D.; Kim, D. H.; Park, S. Y.; Park, J. H.

    2005-10-01

This project was performed by KAERI within the framework of building an international cooperative basis for nuclear energy. It was supported by MOST under the title 'Establishment of Technical Collaboration basis between Korea and France for the development of severe accident assessment computer code under high burn up condition'. Currently operating NPPs discharge fuel as spent fuel after a burnup of 40 GWD/MTU. In Korea, however, burnups of more than 60 GWD/MTU are expected, both because of higher fuel efficiency and because of the cost savings from safely storing less spent fuel. Domestic research aimed at developing fuel and cladding that can be used at high burnups of up to 100 GWD/MTU is now in progress. The current computer codes, however, adopt models and data that are valid only up to about 40 GWD/MTU, and therefore cannot account for the phenomena that may change the fission product release behavior or the core damage process at high burnup (more than 40 GWD/MTU). To evaluate the safety of an NPP with high burnup fuel, improving the current severe accident codes for high burnup conditions is an important research item, and it should start without delay. In this study, therefore, an expert group was formed to establish the research basis for severe accidents under high burnup conditions. This expert group selected and identified the research items regarding high burnup conditions through discussions and technical seminars. Based on these selected items, a meeting between IRSN and KAERI to identify cooperative research items on severe accidents under high burnup conditions was held at the IRSN headquarters in Paris. After the meeting, KAERI and IRSN agreed to cooperate with each other on the selected items, and to co-host the international seminar, and to develop the model and to

  5. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Science.gov (United States)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  6. User's manual for BINIAC: A computer code to translate APET bins

    International Nuclear Information System (INIS)

    Gough, S.T.

    1994-03-01

    This report serves as the user's manual for the FORTRAN code BINIAC. BINIAC is a utility code designed to format the output from the Defense Waste Processing Facility (DWPF) Accident Progression Event Tree (APET) methodology. BINIAC inputs the accident progression bins from the APET methodology, converts the frequency from occurrences per hour to occurrences per year, sorts the progression bins, and converts the individual dimension character codes into facility attributes. Without the use of BINIAC, this process would be done manually at great time expense. BINIAC was written under the quality assurance control of IQ34 QAP IV-1, revision 0, section 4.1.4. Configuration control is established through the use of a proprietor and a cognizant users list
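The processing steps described above can be sketched as follows. The hours-per-year factor and the character-to-attribute mapping are illustrative assumptions made here, not BINIAC's actual conversion factor or the DWPF APET bin definitions.

```python
HOURS_PER_YEAR = 8766  # 365.25 days; an assumed convention, not BINIAC's documented factor

# Hypothetical mapping from single-character dimension codes to facility
# attributes; the real APET bin dimensions are DWPF-specific.
ATTRIBUTE_MAP = {
    "A": "seismic event",
    "B": "fire in process cell",
    "C": "filtration available",
}

def translate_bins(bins):
    """Sketch of BINIAC-style post-processing: convert per-hour
    frequencies to per-year, decode each character of the bin descriptor
    into a facility attribute, and sort bins by descending annual
    frequency.  Input: iterable of (descriptor, freq_per_hour) pairs."""
    out = []
    for descriptor, freq_per_hour in bins:
        out.append({
            "bin": descriptor,
            "freq_per_year": freq_per_hour * HOURS_PER_YEAR,
            "attributes": [ATTRIBUTE_MAP.get(c, "unknown") for c in descriptor],
        })
    out.sort(key=lambda b: b["freq_per_year"], reverse=True)
    return out
```

Doing this by hand for hundreds of bins is exactly the "great time expense" the report mentions.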

  7. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
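A minimal sketch of the vector-coding idea (illustrative only; the matrices below are hand-picked, not produced by the paper's algorithms): intermediate nodes combine length-L packets with L x L matrices over GF(2), and a sink that collects combinations whose stacked 2L x 2L matrix is invertible recovers the source packets by Gaussian elimination.

```python
def gf2_matvec(A, v):
    """Multiply an L x L binary matrix by a length-L packet over GF(2)."""
    return [sum(a * x for a, x in zip(row, v)) % 2 for row in A]

def gf2_solve(M, b):
    """Solve M x = b over GF(2) by Gauss-Jordan elimination (M invertible)."""
    M = [row[:] for row in M]
    b = b[:]
    n = len(b)
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
                b[r] ^= b[col]
    return b

# Packets of length L = 2; an intermediate node emits the GF(2)
# combination y = A1*p1 xor A2*p2 of its incoming packets.
p1, p2 = [1, 0], [1, 1]
A1, A2 = [[1, 0], [0, 1]], [[1, 0], [0, 1]]   # edge 1 carries p1 xor p2
B1, B2 = [[1, 0], [0, 1]], [[0, 1], [1, 1]]   # edge 2: a different mix
y1 = [a ^ b for a, b in zip(gf2_matvec(A1, p1), gf2_matvec(A2, p2))]
y2 = [a ^ b for a, b in zip(gf2_matvec(B1, p1), gf2_matvec(B2, p2))]

# Sink solves the stacked system [[A1 A2], [B1 B2]] [p1; p2] = [y1; y2]
M = [A1[0] + A2[0], A1[1] + A2[1], B1[0] + B2[0], B1[1] + B2[1]]
decoded = gf2_solve(M, y1 + y2)
```

Choosing the matrices so that the stacked system is invertible at every sink is precisely what the coding-matrix selection algorithms must guarantee.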

  8. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Zaidman, A.; Van Rompaey, B.; Van Deursen, A.; Demeyer, S.

    2010-01-01

    Many software production processes advocate rigorous development testing alongside functional code writing, which implies that both test code and production code should co-evolve. To gain insight in the nature of this co-evolution, this paper proposes three views (realized by a tool called TeMo)

  9. Numerical study of furnace process of a 600 MW pulverized coal boiler under low load with SNCR application

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Q.X.; Shi, Y.; Liu, H.; Yang, C.H.; Wu, S.H. [Harbin Institute of Technology, Harbin (China)

    2013-07-01

Numerical simulation of the flow, heat transfer and combustion processes in a 600 MW pulverized coal boiler under low load is performed using the Computational Fluid Dynamics (CFD) code Fluent. The distributions of temperature and species were obtained and their influence on selective non-catalytic reduction (SNCR) was analyzed. The results indicate that the furnace temperature changes significantly as the operating load declines, and the furnace region with a temperature suitable for the SNCR reaction moves lower with decreasing load. As the load falls, the O2 concentration available for SNCR reactions rises gently and the initial NOx concentration decreases slightly. These variations can have some influence on the SNCR process. In the upper furnace, where the temperature is suitable for SNCR reactions, the CO concentration is close to 0 under the different loads; consequently, based on the calculations in this work, the SNCR process will not be affected by CO.

  10. User's manual for the Oak Ridge Tokamak Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.K.; Hogan, J.T.; Howe, H.C.; Arnurius, D.E.

    1977-02-01

A one-dimensional tokamak transport code is described which simulates a plasma discharge using a fluid model which includes power balances for electrons and ions, conservation of mass, and Maxwell's equations. The modular structure of the code allows a user to add models of various physical processes which can modify the discharge behavior. Such physical processes treated in the version of the code described here include effects of plasma transport, neutral gas transport, impurity diffusion, and neutral beam injection. Each process can be modeled by a parameterized analytic formula or by at least one detailed numerical calculation. The program logic of each module is presented, followed by detailed descriptions of each subroutine used by the module. The physics underlying the models is only briefly summarized. The transport code was written in IBM FORTRAN-IV and implemented on IBM 360/370 series computers at the Oak Ridge National Laboratory and on the CDC 7600 computers of the Magnetic Fusion Energy (MFE) Computing Center of the Lawrence Livermore Laboratory. A listing of the current reference version is provided on accompanying microfiche.
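The modular structure can be sketched in miniature: a 1-D explicit diffusion step that accepts pluggable source "modules", each representing one physical process. This is purely schematic (slab geometry, toy sources, invented parameter values), not the Oak Ridge model.

```python
def step_diffusion(T, chi, dr, dt, sources):
    """One explicit time step of a 1-D slab diffusion equation
    dT/dt = chi * d2T/dr2 + sum(sources), illustrating a modular
    transport solver: each source is a callable module s(i, T).
    Boundaries: zero gradient on axis, fixed edge temperature."""
    n = len(T)
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + dt * (chi * (T[i + 1] - 2 * T[i] + T[i - 1]) / dr ** 2
                             + sum(s(i, T) for s in sources))
    Tn[0] = Tn[1]      # zero gradient at the axis
    Tn[-1] = T[-1]     # fixed edge temperature
    return Tn

# Each physical process is a separate pluggable module: here, a toy
# uniform heating source and a toy temperature-proportional loss term.
ohmic = lambda i, T: 0.5
radiation = lambda i, T: -0.01 * T[i]
```

Adding, say, an impurity radiation model then means appending one more callable to the `sources` list rather than rewriting the solver, which is the point of the modular design described above.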

  11. CONTAIN code analyses of direct containment heating experiments

    International Nuclear Information System (INIS)

    Williams, D.C.; Griffith, R.O.; Tadios, E.L.; Washington, K.E.

    1995-01-01

    In some nuclear reactor core-melt accidents, a potential exists for molten core-debris to be dispersed into the containment under high pressure. Resulting energy transfer to the containment atmosphere can pressurize the containment. This process, known as direct containment heating (DCH), has been the subject of extensive experimental and analytical programs sponsored by the U.S. Nuclear Regulatory Commission (NRC). The DCH modeling has been an important focus for the development of the CONTAIN code. Results of a detailed independent peer review of the CONTAIN code were published recently. This paper summarizes work performed in support of the peer review in which the CONTAIN code was applied to analyze DCH experiments. Goals of this work were comparison of calculated and experimental results, CONTAIN DCH model assessment, and development of guidance for code users, including development of a standardized input prescription for DCH analysis

  12. A Deformation Analysis Code of CANDU Fuel under the Postulated Accident: ELOCA

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jung, Jong Yeob

    2006-11-15

Deformations of the fuel element or fuel channel might be the main cause of the fuel failure. Therefore, the accurate prediction of the deformation and the analysis capabilities are closely related to the increase of the safety margin of the reactor. In this report, among the performance analysis or the transient behavior prediction computer codes, the analysis codes for deformation such as the ELOCA, HOTSPOT, CONTACT-1, and PTDFORM are briefly introduced and each code's objectives, applicability, and relations are explained. Especially, the user manual for ELOCA code which is the analysis code for the fuel deformation and the release of fission product during the transient period after the postulated accidents is provided so that it can be the guidance to the potential users of the code and save the time and economic loss by reducing the trial and error.

  13. A Deformation Analysis Code of CANDU Fuel under the Postulated Accident: ELOCA

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jung, Jong Yeob

    2006-11-01

    Deformations of the fuel element or fuel channel might be the main cause of the fuel failure. Therefore, the accurate prediction of the deformation and the analysis capabilities are closely related to the increase of the safety margin of the reactor. In this report, among the performance analysis or the transient behavior prediction computer codes, the analysis codes for deformation such as the ELOCA, HOTSPOT, CONTACT-1, and PTDFORM are briefly introduced and each code's objectives, applicability, and relations are explained. Especially, the user manual for ELOCA code which is the analysis code for the fuel deformation and the release of fission product during the transient period after the postulated accidents is provided so that it can be the guidance to the potential users of the code and save the time and economic loss by reducing the trial and error

  14. Code development for nuclear reactor simulation

    International Nuclear Information System (INIS)

    Chauliac, C.; Verwaerde, D.; Pavageau, O.

    2006-01-01

Full text of publication follows: For several years, CEA, EDF and FANP have developed numerical codes which are currently used for nuclear industry applications and will remain in use for the coming years. To complement this set of codes and better meet present and future needs, a new system is being developed through a joint venture between CEA, EDF and FANP, with a ten-year prospect and strong intermediate milestones. The focus is put on a multi-scale and multi-physics approach enabling phenomena from the microscopic to the macroscopic scale to be taken into account, and interactions to be described between various physical fields such as neutronics (DESCARTES), thermal-hydraulics (NEPTUNE) and fuel behaviour (PLEIADES). This approach is based on a more rational design of the software and uses a common integration platform providing pre-processing, supervision of computation and post-processing. This paper describes the overall system under development and presents the first results obtained. (authors)

  15. Code for calculation of spreading of radioactivity in reactor containment systems

    International Nuclear Information System (INIS)

    Vertes, P.

    1992-09-01

    A detailed description of the new version of TIBSO code is given, with applications for accident analysis in a reactor containment system. The TIBSO code can follow the nuclear transition and the spatial migration of radioactive materials. The modelling of such processes is established in a very flexible way enabling the user to investigate a wide range of problems. The TIBSO code system is described in detail, taking into account the new developments since 1983. Most changes improve the capabilities of the code. The new version of TIBSO system is written in FORTRAN-77 and can be operated both under VAX VMS and PC DOS. (author) 5 refs.; 3 figs.; 21 tabs
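The coupling of nuclear transitions with spatial migration that TIBSO follows can be sketched with a toy two-region model: material decays everywhere at a common rate while also migrating from region 1 to region 2. The rates and the explicit Euler integration are illustrative assumptions, not TIBSO's scheme.

```python
def decay_and_transfer(n1, n2, lam, k12, dt, steps):
    """Toy version of the two coupled processes: radioactive decay
    (decay constant lam, acting in both regions) and first-order
    migration of material from region 1 to region 2 (rate k12),
    integrated with small explicit Euler steps.  Returns the final
    inventories (n1, n2)."""
    for _ in range(steps):
        moved = k12 * n1 * dt              # material migrating 1 -> 2
        n1 = n1 - lam * n1 * dt - moved    # decay + outflow
        n2 = n2 - lam * n2 * dt + moved    # decay + inflow
    return n1, n2
```

A useful sanity check on such a scheme is that migration conserves the total inventory: per step the sum is multiplied by exactly (1 - lam*dt), so only decay removes material.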

  16. Identification of important phenomena under sodium fire accidents based on PIRT process with factor analysis in sodium-cooled fast reactor

    International Nuclear Information System (INIS)

    Aoyagi, Mitsuhiro; Uchibori, Akihiro; Kikuchi, Shin; Takata, Takashi; Ohno, Shuji; Ohshima, Hiroyuki

    2016-01-01

The PIRT (Phenomena Identification and Ranking Table) process is an effective method to identify key phenomena involved in safety issues in nuclear power plants. The present PIRT process is aimed at validating sodium fire analysis codes. Because a sodium fire accident in a sodium-cooled fast reactor (SFR) involves complex phenomena, various figures of merit (FOMs) could exist in this PIRT process. In addition, the importance evaluation of phenomena for each FOM should be implemented in an objective manner under the PIRT process. This paper describes the methodology for the specification of FOMs, the identification of associated phenomena and the importance evaluation of each associated phenomenon in order to complete a ranking table of important phenomena involved in a sodium fire accident in an SFR. The FOMs were specified through factor analysis in this PIRT process. Physical parameters to be quantified by a sodium fire analysis code were identified by considering concerns resulting from sodium fire in the factor analysis. Associated phenomena were identified through the element- and sequence-based phenomena analyses as is often conducted in PIRT processes. The importance of each associated phenomenon was evaluated by considering the sequence-based analysis of associated phenomena correlated with the FOMs. Then, we complete the ranking table through the factor and phenomenon analyses. (author)

  17. Computer Code for Nanostructure Simulation

    Science.gov (United States)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  18. New Advances in Photoionisation Codes: How and what for?

    International Nuclear Information System (INIS)

    Ercolano, Barbara

    2005-01-01

The study of photoionised gas in planetary nebulae (PNe) has played a major role in the achievement, over the years, of a better understanding of a number of physical processes, pertinent to a broader range of fields than that of PNe studies, spanning from atomic physics to stellar evolution theories. Whilst empirical techniques are routinely employed for the analysis of the emission line spectra of these objects, the accurate interpretation of the observational data often requires the solution of a set of coupled equations, via the application of a photoionisation/plasma code. A number of large-scale codes have been developed since the late sixties, using various analytical or statistical techniques for the transfer of continuum radiation, mainly under the assumption of spherical symmetry and a few in 3D. These codes have proved to be powerful and in many cases essential tools, but a clear idea of the underlying physical processes and assumptions is necessary in order to avoid reaching misleading conclusions. The development of the codes over the years has been driven by the observational constraints available, but also compromised by the available computer power. Modern codes are faster and more flexible, with the ultimate goal being the achievement of a description of the observations relying on the smallest number of parameters possible. In this light, recent developments have been focused on the inclusion of newly available atomic data, the inclusion of a realistic treatment for dust grains mixed in the ionised and photon dominated regions (PDRs) and the expansion of some codes to PDRs with the inclusion of chemical reaction networks. Furthermore, the last few years have seen the development of fully 3D photoionisation codes based on the Monte Carlo method. A brief review of the field of photoionisation today is given here, with emphasis on the recent developments, including the expansion of the models to the 3D domain.
Attention is given to the identification

  19. Multiple optical code-label processing using multi-wavelength frequency comb generator and multi-port optical spectrum synthesizer.

    Science.gov (United States)

    Moritsuka, Fumi; Wada, Naoya; Sakamoto, Takahide; Kawanishi, Tetsuya; Komai, Yuki; Anzai, Shimako; Izutsu, Masayuki; Kodate, Kashiko

    2007-06-11

In optical packet switching (OPS) and optical code division multiple access (OCDMA) systems, label generation and processing are key technologies. Recently, several label processors have been proposed and demonstrated; however, in order to recognize N different labels, N separate devices are required. Here, we propose and experimentally demonstrate a large-scale, multiple optical code (OC)-label generation and processing technology based on a multi-port, fully tunable optical spectrum synthesizer (OSS) and a multi-wavelength electro-optic frequency comb generator. The OSS can generate 80 different OC-labels simultaneously and can perform 80-parallel matched filtering. We also demonstrated its application to OCDMA.
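On a small scale, the parallel matched filtering can be mimicked electronically with orthogonal chip sequences standing in for the spectral code-labels. This is an analogy under stated assumptions (8 Walsh-Hadamard labels instead of 80 optical codes), not the optical implementation.

```python
import random

def hadamard(n):
    """Rows of an n x n Hadamard matrix (n a power of two), built by
    the Sylvester doubling construction; each row is a +/-1 chip code."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

# Eight mutually orthogonal labels standing in for optical code-labels.
LABELS = {i: row for i, row in enumerate(hadamard(8))}

def matched_filter_bank(received):
    """Correlate the received chip sequence against every stored label
    in parallel and return the best-matching label index, mimicking
    the OSS's parallel matched filtering on a small scale."""
    scores = {k: sum(c * r for c, r in zip(code, received))
              for k, code in LABELS.items()}
    return max(scores, key=scores.get)
```

Because the labels are orthogonal, the correct correlator peaks at the code length while all others stay near zero, so recognition survives moderate noise.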

  20. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was used successively in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab on the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR (the 'Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.
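The core idea of such a simulation — a pulse-to-pulse loop in which a noisy beam-position measurement drives a gain-weighted correction against ground-motion drift — can be sketched in a few lines. This is a generic toy model, not LFSC code; all names and parameter values are illustrative:

```python
import random

def simulate_feedback(pulses=2000, gain=0.5, drift=0.05, bpm_noise=0.02, seed=7):
    """Toy pulse-to-pulse feedback loop: the beam offset performs a random
    walk (ground motion); on each pulse, a corrector subtracts gain times the
    measured offset, where the measurement carries BPM noise."""
    rng = random.Random(seed)
    x = 0.0       # accumulated uncorrected drift
    corr = 0.0    # corrector setting
    raw, corrected = [], []
    for _ in range(pulses):
        x += rng.gauss(0.0, drift)
        y = x + corr                       # beam position seen on this pulse
        raw.append(x)
        corrected.append(y)
        m = y + rng.gauss(0.0, bpm_noise)  # noisy BPM reading
        corr -= gain * m                   # correction applied to the next pulse
    return raw, corrected
```

Running it shows the familiar effect of such feedbacks: the uncorrected random walk grows without bound, while the corrected position settles to a small steady-state jitter set by the drift per pulse, the loop gain, and the BPM noise.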

  1. Code Development in Coupled PARCS/RELAP5 for Supercritical Water Reactor

    Directory of Open Access Journals (Sweden)

    Po Hu

    2014-01-01

    Full Text Available A new capability is added to the existing coupled code package PARCS/RELAP5 in order to analyze the SCWR design under supercritical pressure with separated water coolant and moderator channels. This expansion is carried out on both codes. In PARCS, modification is focused on extending the water property tables to supercritical pressure, modifying the variable mapping input file and related code modules for processing thermal-hydraulic information from the separated coolant/moderator channels, and modifying the neutronics feedback module to deal with the separated coolant/moderator channels. In RELAP5, modification is focused on incorporating more accurate water properties near SCWR operation/transient pressures and temperatures in the code. Confirming tests of the modifications are presented, and the major analysis results from the extended code package are summarized.

  2. Status of reactor core design code system in COSINE code package

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.; Yu, H.; Liu, Z., E-mail: yuhui@snptc.com.cn [State Nuclear Power Software Development Center, SNPTC, National Energy Key Laboratory of Nuclear Power Software (NEKLS), Beijiing (China)

    2014-07-01

    For self-reliance, the COre and System INtegrated Engine for design and analysis (COSINE) code package is under development in China. In this paper, the recent development status of the reactor core design code system (including the lattice physics code and the core simulator) is presented. The well-established theoretical models have been implemented, and the preliminary verification results are illustrated. Some special efforts, such as updated theory models and direct data access application, have also been made to achieve a better software product. (author)

  3. Status of reactor core design code system in COSINE code package

    International Nuclear Information System (INIS)

    Chen, Y.; Yu, H.; Liu, Z.

    2014-01-01

    For self-reliance, the COre and System INtegrated Engine for design and analysis (COSINE) code package is under development in China. In this paper, the recent development status of the reactor core design code system (including the lattice physics code and the core simulator) is presented. The well-established theoretical models have been implemented, and the preliminary verification results are illustrated. Some special efforts, such as updated theory models and direct data access application, have also been made to achieve a better software product. (author)

  4. 9 CFR 355.25 - Canning with heat processing and hermetically sealed containers; closures; code marking; heat...

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Canning with heat processing and hermetically sealed containers; closures; code marking; heat processing; incubation. 355.25 Section 355.25... IDENTIFICATION AS TO CLASS, QUALITY, QUANTITY, AND CONDITION Inspection Procedure § 355.25 Canning with heat...

  5. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16, 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58, 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the (correct) reading frame, an important criterion for analyzing X motifs in genes in the future.
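The graph construction behind this approach can be sketched concretely. In Fimmel et al. (2016), the vertices of the graph associated with a code X are the nucleotides and dinucleotides occurring in X, with edges b1 → b2b3 and b1b2 → b3 for each trinucleotide b1b2b3 in X, and X is circular if and only if this graph is acyclic. A minimal Python illustration (not the authors' software):

```python
def code_graph(code):
    """Directed graph G(X): for each trinucleotide b1b2b3 in the code X,
    add the edges b1 -> b2b3 and b1b2 -> b3 (Fimmel et al., 2016)."""
    graph = {}
    for t in code:
        for u, v in ((t[0], t[1:]), (t[:2], t[2])):
            graph.setdefault(u, set()).add(v)
            graph.setdefault(v, set())
    return graph

def is_acyclic(graph):
    """DFS cycle check; a trinucleotide code X is circular iff G(X) is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:           # back edge: directed cycle found
                return False
            if color[w] == WHITE and not dfs(w):
                return False
        color[v] = BLACK
        return True

    return all(dfs(v) for v in list(graph) if color[v] == WHITE)
```

For instance, the periodic code {AAA} is not circular: its graph contains the cycle A → AA → A, so `is_acyclic` returns False.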

  6. User's manual for seismic analysis code 'SONATINA-2V'

    Energy Technology Data Exchange (ETDEWEB)

    Hanawa, Satoshi; Iyoku, Tatsuo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    2001-08-01

    The seismic analysis code, SONATINA-2V, has been developed to analyze the behavior of the HTTR core graphite components under seismic excitation. The SONATINA-2V code is a two-dimensional computer program capable of analyzing the vertical arrangement of the HTTR graphite components, such as fuel blocks, replaceable reflector blocks, permanent reflector blocks, as well as their restraint structures. In the analytical model, each block is treated as a rigid body and is restrained by dowel pins which restrict relative horizontal movement but allow vertical and rocking motions between upper and lower blocks. Moreover, the SONATINA-2V code is capable of analyzing the core vibration behavior under simultaneous excitations in the vertical and horizontal directions. The SONATINA-2V code is composed of the main program, a pre-processor for making the input data to SONATINA-2V, and a post-processor for data processing and making graphics from the analytical results. Though the SONATINA-2V code was originally developed to work on the MSP computer system of the Japan Atomic Energy Research Institute (JAERI), that system was retired as computer technology progressed. The analysis code was therefore improved to run on JAERI's UNIX-based SR8000 computer system. The user's manual for the seismic analysis code SONATINA-2V, including the pre- and post-processors, is given in the present report. (author)

  7. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin

    2012-01-01

    The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, accelerator, or fusion facility may require accurate modeling of complicated geometry. Its manual modeling by using the text interface of a MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of a computer aided design (CAD) system. There have been two kinds of approaches to develop MC code systems utilizing CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM, GEOMIT, etc., which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of the existing MC codes without any modifications, but implies latent data inconsistency due to the difference of the geometry modeling systems. In the second approach, a MC code utilizes the CAD data for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.
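The essence of surface-based tracking is computing, for each primitive surface, the distance along the flight direction to the next crossing, then moving the particle to the nearest one. A toy ray-sphere sketch of that step (illustrative only, not McSIM's or OpenCASCADE's actual routines):

```python
import math

def distance_to_sphere(p, d, center, r):
    """Smallest positive distance along unit direction d from point p to the
    sphere surface, or None if the ray misses it (solves the quadratic)."""
    ox = [p[i] - center[i] for i in range(3)]
    b = sum(ox[i] * d[i] for i in range(3))
    c = sum(v * v for v in ox) - r * r
    disc = b * b - c
    if disc < 0.0:
        return None
    s = math.sqrt(disc)
    for t in (-b - s, -b + s):   # nearer root first
        if t > 1e-12:
            return t
    return None

def track(p, d, spheres):
    """One tracking step: move the particle to the nearest surface crossing.
    spheres is a list of (center, radius) pairs; returns (new_point, distance)
    or None if no surface is hit."""
    hits = [distance_to_sphere(p, d, c, r) for (c, r) in spheres]
    hits = [t for t in hits if t is not None]
    if not hits:
        return None
    t = min(hits)
    return [p[i] + t * d[i] for i in range(3)], t
```

A production code does the same with general CAD primitives (quadrics, B-rep faces) and adds cell-membership logic, but the step structure is the same.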

  8. Neurophysiological processes and functional neuroanatomical structures underlying proactive effects of emotional conflicts.

    Science.gov (United States)

    Schreiter, Marie Luise; Chmielewski, Witold; Beste, Christian

    2018-07-01

    There is a strong inter-relation of cognitive and emotional processes as evidenced by emotional conflict monitoring processes. In the cognitive domain, proactive effects of conflicts have widely been studied; i.e. effects of conflicts in the n-1 trial on trial n. Yet, the neurophysiological processes and associated functional neuroanatomical structures underlying such proactive effects during emotional conflicts have not been investigated. This is done in the current study combining EEG recordings with signal decomposition methods and source localization approaches. We show that an emotional conflict in the n-1 trial differentially influences processing of positive and negative emotions in trial n, but not the processing of conflicts in trial n. The dual competition framework stresses the importance of dissociable 'perceptual' and 'response selection' or cognitive control levels for interactive effects of cognition and emotion. Only once these coding levels were isolated in the neurophysiological data, processes explaining the behavioral effects were detectable. The data show that there is not only a close correspondence between theoretical propositions of the dual competition framework and neurophysiological processes. Rather, processing levels conceptualized in the framework operate in overlapping time windows, but are implemented via distinct functional neuroanatomical structures; the precuneus (BA31) and the insula (BA13). It seems that decoding of information in the precuneus, as well as the integration of information during response selection in the insula is more difficult when confronted with angry facial emotions whenever cognitive control resources have been highly taxed by previous conflicts. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Component processes underlying future thinking.

    Science.gov (United States)

    D'Argembeau, Arnaud; Ortoleva, Claudia; Jumentier, Sabrina; Van der Linden, Martial

    2010-09-01

    This study sought to investigate the component processes underlying the ability to imagine future events, using an individual-differences approach. Participants completed several tasks assessing different aspects of future thinking (i.e., fluency, specificity, amount of episodic details, phenomenology) and were also assessed with tasks and questionnaires measuring various component processes that have been hypothesized to support future thinking (i.e., executive processes, visual-spatial processing, relational memory processing, self-consciousness, and time perspective). The main results showed that executive processes were correlated with various measures of future thinking, whereas visual-spatial processing abilities and time perspective were specifically related to the number of sensory descriptions reported when specific future events were imagined. Furthermore, individual differences in self-consciousness predicted the subjective feeling of experiencing the imagined future events. These results suggest that future thinking involves a collection of processes that are related to different facets of future-event representation.

  10. Parallel processing is good for your scientific codes...But massively parallel processing is so much better

    International Nuclear Information System (INIS)

    Thomas, B.; Domain, Ch.; Souffez, Y.; Eon-Duval, P.

    1998-01-01

    Harnessing the power of many computers to solve concurrently difficult scientific problems is one of the most innovative trends in High Performance Computing. At EDF, we have invested in parallel computing and have achieved significant results. First we improved the processing speed of strategic codes, in order to extend their scope. Then we turned to numerical simulations at the atomic scale. These computations, which we never dreamt of before, provided us with a better understanding of metallurgic phenomena. More precisely, we were able to trace defects in alloys that are used in nuclear power plants. (author)

  11. Gas processing in the nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Kovach, J.L.

    1995-02-01

    This article is a brief overview of code requirements in the nuclear air cleaning arena. NRC standards, which employ the various ASME codes, are noted. It is also noted that DOE facilities do not fall under the purview of the NRC and that DOE facilities (especially fuel cycle facilities) typically have broader gas processing activities than power reactors. The typical differences between the gas processing needs of DOE facilities and those of power reactor facilities are listed, as are DOE facility components not covered by the ASME AG-1 code.

  12. Rulemaking efforts on codes and standards

    International Nuclear Information System (INIS)

    Millman, G.C.

    1992-01-01

    Section 50.55a of the NRC regulations provides a mechanism for incorporating national codes and standards into the regulatory process. It incorporates by reference the ASME Boiler and Pressure Vessel Code (ASME B and PV Code) Section III rules for construction and Section XI rules for inservice inspection and inservice testing. The regulation is periodically amended to update these references. The rulemaking process, as applied to Section 50.55a amendments, is overviewed to familiarize users with the associated internal activities of the NRC staff and the manner in which public comments are integrated into the process. The four ongoing rulemaking actions that would individually amend Section 50.55a are summarized. Two of the actions would directly impact requirements for inservice testing. Benefits accrued with NRC endorsement of the ASME B and PV Code, and possible future endorsement of the ASME Operations and Maintenance Code (ASME OM Code), are identified. Emphasis is placed on the need for code writing committees to be especially sensitive to user feedback on code rules incorporated into the regulatory process, to ensure that the rules are complete, technically accurate, clear, practical, and enforceable.

  13. Sandia National Laboratories analysis code data base

    Science.gov (United States)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  14. Sandia National Laboratories analysis code data base

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, C.W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  15. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1986-01-01

    We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238 U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems. (author)
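The amplification mechanism described here — a small interpolation error in a tabulated cross section propagating into a much larger relative error in a finite-difference temperature derivative of the self-shielding factor — can be illustrated with a toy resonance. Everything below (the Lorentzian resonance shape, the grids, and the Bondarenko-type factor) is an illustrative sketch, not the actual LINEAR/RECENT/SIGMA1 processing chain:

```python
import numpy as np

def trapz(y, x):
    # Simple trapezoidal integration (avoids NumPy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def sigma(E, T, E0=2.5, gamma0=0.02):
    # Toy Doppler-broadened resonance: Lorentzian whose width grows ~ sqrt(T)
    g = gamma0 * np.sqrt(T / 300.0)
    return 1.0 + 1.0e3 * g**2 / ((E - E0)**2 + g**2)

def f_factor(s, E, sigma0=50.0):
    # Bondarenko-type self-shielding factor from a tabulated cross section
    sig_eff = trapz(s / (s + sigma0), E) / trapz(1.0 / (s + sigma0), E)
    sig_inf = trapz(s, E) / (E[-1] - E[0])
    return sig_eff / sig_inf

fine = np.linspace(2.0, 3.0, 20001)    # dense grid: "exact" reference
coarse = np.linspace(2.0, 3.0, 201)    # thinned grid a processing code might emit

def f_exact(T):
    return f_factor(sigma(fine, T), fine)

def f_lin(T):
    # Linearly interpolate the thinned cross section back onto the dense grid
    return f_factor(np.interp(fine, coarse, sigma(coarse, T)), fine)

dT = 50.0
dfdT_exact = (f_exact(300.0 + dT) - f_exact(300.0 - dT)) / (2.0 * dT)
dfdT_lin = (f_lin(300.0 + dT) - f_lin(300.0 - dT)) / (2.0 * dT)
err_f = abs(f_lin(300.0) - f_exact(300.0)) / f_exact(300.0)
err_dfdT = abs(dfdT_lin - dfdT_exact) / abs(dfdT_exact)
print(f"relative interpolation error in f: {err_f:.2e}, in df/dT: {err_dfdT:.2e}")
```

The derivative is a small difference of two nearly equal self-shielding factors, so any interpolation error that does not cancel between the two temperatures is relatively magnified in df/dT, which is the qualitative effect the paper reports.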

  16. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1985-01-01

    The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems

  17. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
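The channel model underlying these bounds is easy to exercise numerically: in M-ary PPM on a memoryless Poisson channel, the signal slot collects Poisson(ns + nb) counts, the other M − 1 slots Poisson(nb), and ML detection picks the slot with the largest count. A toy Monte Carlo sketch (uncoded PPM only; parameter values are illustrative) of the kind of simulation the derived bounds make unnecessary at high SNR:

```python
import math
import random

def ppm_symbol_error_rate(M=16, ns=10.0, nb=0.2, trials=20000, seed=1):
    """Monte Carlo SER estimate for uncoded M-ary PPM on a Poisson channel:
    the signal slot collects Poisson(ns + nb) photons, the other M - 1 slots
    Poisson(nb); ML detection picks the slot with the largest count (ties
    broken at random)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; fine for the small means used here
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    errors = 0
    for _ in range(trials):
        counts = [poisson(nb) for _ in range(M)]
        counts[0] = poisson(ns + nb)   # symbol 0 was transmitted
        best = max(range(M), key=lambda i: (counts[i], rng.random()))
        if best != 0:
            errors += 1
    return errors / trials
```

At high SNR (large ns relative to nb) the error rate falls steeply, which is exactly the regime where simulation becomes prohibitively slow and analytical bounds of the kind derived here take over.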

  18. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
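The base-matrix/exponent-matrix construction mentioned above follows a standard QC-LDPC recipe: each exponent e ≥ 0 expands to a Z×Z circulant permutation matrix (the identity cyclically shifted by e), and −1 expands to a Z×Z zero block. A generic sketch (the example exponent matrix is hypothetical, not one from the paper):

```python
import numpy as np

def expand_exponent_matrix(E, Z):
    """Expand an exponent matrix E into a QC-LDPC parity-check matrix H.
    Entry e >= 0 becomes the Z x Z identity cyclically shifted right by e
    columns; entry -1 becomes the Z x Z all-zero block."""
    I = np.eye(Z, dtype=int)
    rows = []
    for row in E:
        blocks = [np.zeros((Z, Z), dtype=int) if e < 0
                  else np.roll(I, e, axis=1)
                  for e in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Hypothetical 2 x 3 exponent matrix with lifting factor Z = 3
E = [[0, 1, -1],
     [2, -1, 0]]
H = expand_exponent_matrix(E, 3)   # 6 x 9 parity-check matrix
```

Girth-4 avoidance of the kind studied in the paper then becomes a constraint on the exponents: certain differences of exponents around 2x2 sub-patterns of the base matrix must not vanish modulo Z.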

  19. LFSC - Linac Feedback Simulation Code

    International Nuclear Information System (INIS)

    Ivanov, Valentin; Fermilab

    2008-01-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was used successively in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab on the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR (the 'Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  20. User input verification and test driven development in the NJOY21 nuclear data processing code

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCartney, Austin Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-21

    Before physically-meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g. cross sections and angular distributions). Perhaps the most popular and widely-trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970s. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides validation checks on user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove to be more intuitive and easy to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document serves as a discussion of the current state of input checking and testing practices in NJOY21.

  1. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  2. Making Visible the Coding Process: Using Qualitative Data Software in a Post-Structural Study

    Science.gov (United States)

    Ryan, Mary

    2009-01-01

    Qualitative research methods require transparency to ensure the "trustworthiness" of the data analysis. The intricate processes of organising, coding and analysing the data are often rendered invisible in the presentation of the research findings, which requires a "leap of faith" for the reader. Computer assisted data analysis software can be used…

  3. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  4. Hermitian self-dual quasi-abelian codes

    Directory of Open Access Journals (Sweden)

    Herbert S. Palines

    2017-12-01

    Full Text Available Quasi-abelian codes constitute an important class of linear codes containing theoretically and practically interesting codes such as quasi-cyclic codes, abelian codes, and cyclic codes. In particular, the sub-class consisting of 1-generator quasi-abelian codes contains large families of good codes. Based on the well-known decomposition of quasi-abelian codes, the characterization and enumeration of Hermitian self-dual quasi-abelian codes are given. In the case of 1-generator quasi-abelian codes, we offer necessary and sufficient conditions for such codes to be Hermitian self-dual and give a formula for the number of these codes. In the case where the underlying groups are some $p$-groups, the actual number of resulting Hermitian self-dual quasi-abelian codes is determined.

  5. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  6. Comparison of strongly heat-driven flow codes for unsaturated media

    International Nuclear Information System (INIS)

    Updegraff, C.D.

    1989-08-01

    Under the sponsorship of the US Nuclear Regulatory Commission, Sandia National Laboratories (SNL) is developing a performance assessment methodology for the analysis of long-term disposal of high-level radioactive waste (HLW) in unsaturated welded tuff. As part of this effort, SNL evaluated existing strongly heat-driven flow computer codes for simulating ground-water flow in unsaturated media. The three codes tested, NORIA, PETROS, and TOUGH, were compared against a suite of problems for which analytical and numerical solutions or experimental results exist. The problems were selected to test the abilities of the codes to simulate situations ranging from simple, uncoupled processes, such as two-phase flow or heat transfer, to fully coupled processes, such as vaporization caused by high temperatures. In general, all three codes were found to be difficult to use because of (1) built-in time stepping criteria, (2) the treatment of boundary conditions, and (3) handling of evaporation/condensation problems. A drawback of the study was that adequate problems related to expected repository conditions were not available in the literature. Nevertheless, the results of this study suggest the need for thorough investigations of the impact of heat on the flow field in the vicinity of an unsaturated HLW repository. Recommendations are to develop a new flow code combining the best features of these three codes and eliminating the worst ones. 19 refs., 49 figs

  7. Identification and Functional Analysis of Long Intergenic Non-coding RNAs Underlying Intramuscular Fat Content in Pigs

    Directory of Open Access Journals (Sweden)

    Cheng Zou

    2018-03-01

    Full Text Available Intramuscular fat (IMF) content is an important trait that affects pork quality. Previous studies have identified many genes that regulate IMF. Long intergenic non-coding RNAs (lincRNAs) are emerging as key regulators in various biological processes. However, lincRNAs related to IMF in the pig are largely unknown, and the mechanisms by which they regulate IMF are yet to be elucidated. Here we reconstructed 105,687 transcripts and identified 1,032 lincRNAs in pig longissimus dorsi muscle (LDM) at four stages with different IMF contents, based on published RNA-seq data. These lincRNAs show typical characteristics, such as shorter length and lower expression, compared with protein-coding genes. Combining these data with methylation data, we found that methylation of both the promoter and the gene body of a lincRNA can negatively regulate its expression. We found that lincRNAs exhibit high expression correlation with their protein-coding neighbors. Co-expression network analysis yielded eight stage-specific modules; gene ontology and pathway analysis of these modules suggested that some lincRNAs are involved in IMF-related processes, such as fatty acid metabolism and the peroxisome proliferator-activated receptor signaling pathway. Furthermore, we identified hub lincRNAs and found that six of them may play important roles in IMF development. This work details lincRNAs that may affect IMF development in the pig, and facilitates future research on these lincRNAs and molecular-assisted breeding of pigs.
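
    Both relationships reported above (negative methylation-expression regulation, positive co-expression with protein-coding neighbors) come down to rank correlations across samples. A minimal sketch with hypothetical toy values (not the paper's data; it assumes no tied values, so it omits the tie correction a production Spearman implementation needs):

    ```python
    import numpy as np

    def spearman_rho(x, y):
        """Spearman rank correlation, assuming no ties in x or y."""
        def rank(v):
            order = np.argsort(v)
            r = np.empty(len(v))
            r[order] = np.arange(1, len(v) + 1)  # 1-based ranks
            return r
        rx, ry = rank(np.asarray(x, float)), rank(np.asarray(y, float))
        rx -= rx.mean()
        ry -= ry.mean()
        return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

    # hypothetical toy values: promoter methylation up, lincRNA expression down
    methylation = [0.10, 0.35, 0.60, 0.85]
    expression = [9.2, 6.5, 3.1, 1.4]
    rho = spearman_rho(methylation, expression)   # strongly negative, here exactly -1
    ```

    A tie-aware implementation (e.g. `scipy.stats.spearmanr`) would be used in practice.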

  8. A Critical Appraisal of the Juvenile Justice System under Cameroon's 2005 Criminal Procedure Code: Emerging Challenges

    Directory of Open Access Journals (Sweden)

    S Tabe

    2012-03-01

    Full Text Available The objective of this article is to examine the changes introduced by the 2005 Cameroonian Criminal Procedure Code in matters of juvenile justice, considering that before this Code, juvenile justice in Cameroon was governed by extra-national laws. In undertaking this analysis, the article highlights the evolution of the administration of juvenile justice 50 years after the independence of Cameroon. It also points out the various difficulties and shortcomings in the treatment of juvenile offenders in Cameroon since the enactment of the new Criminal Procedure Code. The article reveals that the 2005 Code is an amalgamation of all hitherto existing laws in the country that pertained to juvenile justice, and that despite the considerable amount of criticism it has received, the Code is clearly an improvement on the system of juvenile justice in Cameroon, since it represents a balance between the due process rights of young people, the protection of society and the special needs of young offenders. This is so because the drafters of the Code took a broad view of the old laws on juvenile justice. Also, a wide range of groups were consulted, including criminal justice professionals, children's service organisations, victims, parents, young offenders, educators, advocacy groups and social-policy analysts. However, to address the challenges that beset the juvenile justice system of Cameroon, the strategy of the government should be focussed on three areas: the prevention of youth crime, the provision of meaningful consequences for the actions of young people, and the rehabilitation and reintegration of young offenders. Cameroonian law should seek educative solutions rather than impose prison sentences or other repressive measures on young offenders. Special courts to deal with young offenders should be established outside the regular penal system and should be provided with resources that are adequate for and appropriate to fostering their understanding of

  9. The mining code under the light of shale gas

    International Nuclear Information System (INIS)

    Dubreuil, Thomas; Romi, Raphael

    2013-01-01

    The authors analyze the evolution and challenges of the French legal context, notably the French mining code, in relation to the emergence of the issue of shale gas exploitation. They first draw lessons from the law published in 2011, which focused on the use of the hydraulic fracturing technique to forbid any non-conventional hydrocarbon exploitation. They comment on the content of the various legal and official texts published since then, which notably evoked the use of other exploration and exploitation techniques and weakened the 2011 law. In a second part, they discuss political issues such as the influence of the European framework on energy policy, and the integration of mining, energy and land-planning policies, which puts the mining code into question

  10. Application of data analysis techniques to nuclear reactor systems code accuracy assessment

    International Nuclear Information System (INIS)

    Kunz, R.F.; Kasmala, G.F.; Murray, C.J.; Mahaffy, J.H.

    2000-01-01

    An automated code assessment program (ACAP) has been developed by the authors to provide quantitative comparisons between nuclear reactor systems (NRS) code results and experimental measurements. This software was developed under subcontract to the United States Nuclear Regulatory Commission for use in its NRS code consolidation efforts. In this paper, background on the topic of NRS code accuracy and uncertainty assessment is provided, which motivates the development of, and defines basic software requirements for, ACAP. A survey of data analysis techniques was performed, focusing on the applicability of methods in the construction of NRS code-data comparison measures. The results of this review process, which further defined the scope, user interface and process for using ACAP, are also summarized. A description of the software package and several sample applications to NRS data sets are provided. Its functionality and ability to provide objective accuracy assessment figures are demonstrated. (author)
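
    ACAP's actual comparison measures are surveyed in the paper itself; as an illustration of the kind of code-data figure of merit such a tool computes, a root-mean-square deviation normalized by the data range might be sketched as follows (hypothetical function and data, not ACAP's API):

    ```python
    import numpy as np

    def nrms_deviation(code_results, measurements):
        """RMS deviation of code predictions from data, normalized by the data range."""
        c = np.asarray(code_results, float)
        m = np.asarray(measurements, float)
        rms = np.sqrt(np.mean((c - m) ** 2))
        return rms / (m.max() - m.min())

    # hypothetical comparison: pressures (MPa) from a code run vs. an experiment
    predicted = [7.1, 6.8, 6.2, 5.5]
    measured = [7.0, 6.6, 6.3, 5.0]
    score = nrms_deviation(predicted, measured)   # 0 would mean perfect agreement
    ```

    Normalizing by the measured range makes the figure of merit comparable across data sets with different magnitudes, which is one motivation for standardized measures in code consolidation work.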

  11. 21 CFR 106.90 - Coding.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD... of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity with the coding requirements that are applicable to thermally processed low-acid foods packaged in...

  12. Code Development on Fission Product Behavior under Severe Accident-Validation of Aerosol Sedimentation

    International Nuclear Information System (INIS)

    Ha, Kwang Soon; Kim, Sung Il; Jang, Jin Sung; Kim, Dong Ha

    2016-01-01

    The gas and aerosol phases of the radioactive materials move through the reactor coolant systems and containments as loaded on the carrier gas or liquid, such as steam or water. Most radioactive materials might escape in the form of aerosols from a nuclear power plant during a severe reactor accident, and it is very important to predict the behavior of these radioactive aerosols in the reactor cooling system and in the containment building under severe accident conditions. Aerosols are designated as very small solid particles or liquid droplets suspended in a gas phase. The suspended solid or liquid particles typically have a range of sizes of 0.01 μm to 20 μm. Aerosol concentrations in reactor accident analyses are typically less than 100 g/m³ and usually less than 1 g/m³. When there are continuing sources of aerosol to the gas phase or when there are complicated processes involving engineered safety features, much more complicated size distributions develop. It is not uncommon for aerosols in reactor containments to have bimodal size distributions for at least some significant periods of time early during an accident. Salient features of aerosol physics under reactor accident conditions that will affect the nature of the aerosols are (1) the formation of aerosol particles, (2) the growth of aerosol particles, and (3) the shape of aerosol particles. At KAERI, a fission product module has been developed to predict the behaviors of the radioactive materials in the reactor coolant system under severe accident conditions. The fission product module consists of an estimation of the initial inventories, species release from the core, aerosol generation, gas transport, and aerosol transport. The final outcomes of the fission product module designate the radioactive gas and aerosol distribution in the reactor coolant system. The aerosol sedimentation models in the fission product module were validated using the ABCOVE and LACE experiments. There were some discrepancies on the predicted
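
    For particles in the quoted 0.01-20 μm range, gravitational settling in still gas is commonly estimated with Stokes' law. A back-of-the-envelope sketch (standard textbook physics, not KAERI's actual sedimentation model; it ignores the slip correction that matters below roughly 0.1 μm):

    ```python
    def stokes_settling_velocity(d, rho_p=1000.0, rho_g=1.2, mu=1.8e-5, g=9.81):
        """Terminal settling velocity (m/s) of a small sphere in the Stokes regime.

        d     : particle diameter in metres
        rho_p : particle density (kg/m^3); a water-like droplet is assumed here
        rho_g : gas density (kg/m^3)
        mu    : gas dynamic viscosity (Pa.s)
        """
        return (rho_p - rho_g) * g * d ** 2 / (18.0 * mu)

    # settling velocity scales with diameter squared, so the size distribution
    # strongly controls how fast an aerosol population sediments out
    v_1um = stokes_settling_velocity(1e-6)
    v_10um = stokes_settling_velocity(10e-6)   # ~100x faster than the 1 um particle
    ```

    The d² scaling is why the large-particle tail of a bimodal distribution dominates early sedimentation.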

  13. Code Development on Fission Product Behavior under Severe Accident-Validation of Aerosol Sedimentation

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Kwang Soon; Kim, Sung Il; Jang, Jin Sung; Kim, Dong Ha [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The gas and aerosol phases of the radioactive materials move through the reactor coolant systems and containments as loaded on the carrier gas or liquid, such as steam or water. Most radioactive materials might escape in the form of aerosols from a nuclear power plant during a severe reactor accident, and it is very important to predict the behavior of these radioactive aerosols in the reactor cooling system and in the containment building under severe accident conditions. Aerosols are designated as very small solid particles or liquid droplets suspended in a gas phase. The suspended solid or liquid particles typically have a range of sizes of 0.01 μm to 20 μm. Aerosol concentrations in reactor accident analyses are typically less than 100 g/m³ and usually less than 1 g/m³. When there are continuing sources of aerosol to the gas phase or when there are complicated processes involving engineered safety features, much more complicated size distributions develop. It is not uncommon for aerosols in reactor containments to have bimodal size distributions for at least some significant periods of time early during an accident. Salient features of aerosol physics under reactor accident conditions that will affect the nature of the aerosols are (1) the formation of aerosol particles, (2) the growth of aerosol particles, and (3) the shape of aerosol particles. At KAERI, a fission product module has been developed to predict the behaviors of the radioactive materials in the reactor coolant system under severe accident conditions. The fission product module consists of an estimation of the initial inventories, species release from the core, aerosol generation, gas transport, and aerosol transport. The final outcomes of the fission product module designate the radioactive gas and aerosol distribution in the reactor coolant system. The aerosol sedimentation models in the fission product module were validated using the ABCOVE and LACE experiments. There were some discrepancies on the predicted

  14. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    A comparison of the results of a calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  15. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y V; Zaitsev, S I; Tarankov, G A [OKB Gidropress (Russian Federation)

    1996-12-31

    A comparison of the results of a calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  16. Stability analysis by ERATO code

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Matsuura, Toshihiko; Azumi, Masafumi; Kurita, Gen-ichi

    1979-12-01

    Problems in MHD stability calculations with the ERATO code are described, concerning the convergence properties of results, equilibrium codes, and machine optimization of the ERATO code. It is concluded that irregularity on a convergence curve is not due to a fault of the ERATO code itself but to an inappropriate choice of the equilibrium calculation meshes. Also described are a code to calculate an equilibrium as a quasi-inverse problem and a code to calculate an equilibrium as the result of a transport process. Optimization of the code with respect to I/O operations reduced both CPU time and I/O time considerably. With the FACOM230-75 APU/CPU multiprocessor system, the performance is about 6 times as high as with the FACOM230-75 CPU alone, showing the effectiveness of a vector-processing computer for this kind of MHD computation. This report is a summary of the material presented at the ERATO workshop 1979 (ORNL), supplemented with some details. (author)

  17. Enhancement of Pre-and Post-Processing Capability of the CUPID code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jongtae; Park, Ik Kyu; Yoon, Hanyoung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    To simulate heat transfer and fluid flow in a field with a complicated geometry, an unstructured mesh is popularly used. Most commercial CFD (Computational Fluid Dynamics) solvers are based on unstructured mesh technology. An advantage of using unstructured meshes for a field simulation is the reduced man-hours of automatic mesh generation compared to traditional structured mesh generation, which requires a huge amount of man-hours to discretize a complex geometry. Unstructured meshes that can be generated automatically were initially limited to regular cell elements such as tetrahedra, pyramids, prisms, or hexahedra. The multi-dimensional multi-phase flow solver, CUPID, has been developed in the context of an unstructured-mesh finite volume method (FVM). Its numerical formulation and programming structure are independent of the number of faces surrounding the computational cells. Thus, it can be easily extended to polyhedral unstructured meshes. In this study, new tools for enhancing the pre- and post-processing capabilities of CUPID are proposed. They are based on the open-source CFD toolbox OpenFOAM. A goal of this study is an extension of the applicability of the CUPID code by improving the mesh and solution treatment of the code.
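
    The face-count independence mentioned above is the usual consequence of a face-based FVM data structure: fluxes live on an owner/neighbour face list, so cells may have any number of faces and polyhedra come for free. A generic sketch (illustrative only, not CUPID's or OpenFOAM's actual code):

    ```python
    import numpy as np

    def accumulate_fluxes(n_cells, faces, face_flux):
        """Sum face fluxes into per-cell residuals.

        faces     : list of (owner, neighbour) cell indices; neighbour = -1 on boundaries
        face_flux : flux through each face, positive from owner to neighbour
        """
        residual = np.zeros(n_cells)
        for (own, nb), f in zip(faces, face_flux):
            residual[own] -= f          # flux leaves the owner cell
            if nb >= 0:
                residual[nb] += f       # and enters the interior neighbour
        return residual

    # three cells in a row plus one outflow boundary face; note the loop never
    # asks how many faces a cell has, so the cell shape is irrelevant
    faces = [(0, 1), (1, 2), (2, -1)]
    res = accumulate_fluxes(3, faces, [1.0, 1.0, 1.0])
    ```

    Interior fluxes cancel in pairs, so the total residual equals minus the net boundary outflow, which is the discrete conservation property a face-based FVM is built around.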

  18. Proposal to consistently apply the International Code of Nomenclature of Prokaryotes (ICNP) to names of the oxygenic photosynthetic bacteria (cyanobacteria), including those validly published under the International Code of Botanical Nomenclature (ICBN)/International Code of Nomenclature for algae, fungi and plants (ICN), and proposal to change Principle 2 of the ICNP.

    Science.gov (United States)

    Pinevich, Alexander V

    2015-03-01

    This taxonomic note was motivated by the recent proposal [Oren & Garrity (2014) Int J Syst Evol Microbiol 64, 309-310] to exclude the oxygenic photosynthetic bacteria (cyanobacteria) from the wording of General Consideration 5 of the International Code of Nomenclature of Prokaryotes (ICNP), which entails unilateral coverage of these prokaryotes by the International Code of Nomenclature for algae, fungi, and plants (ICN; formerly the International Code of Botanical Nomenclature, ICBN). On the basis of key viewpoints, approaches and rules in the systematics, taxonomy and nomenclature of prokaryotes, it is reciprocally proposed to apply the ICNP to names of cyanobacteria, including those validly published under the ICBN/ICN. For this purpose, a change to Principle 2 of the ICNP is proposed to enable validation of cyanobacterial names published under the ICBN/ICN rules. © 2015 IUMS.

  19. Implementation and use of Gaussian process meta model for sensitivity analysis of numerical models: application to a hydrogeological transport computer code

    International Nuclear Information System (INIS)

    Marrel, A.

    2008-01-01

    In studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutants. These computer codes can depend on a large number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and can often be too expensive in computation time. To conduct uncertainty propagation studies and to measure the importance of each input on the response variability, the computer code has to be approximated by a meta-model, which is built on an acceptable number of simulations of the code and requires a negligible calculation time. We focused our research work on the use of a Gaussian process meta-model to carry out the sensitivity analysis of the code. We proposed a methodology with estimation and input-selection procedures in order to build the meta-model in the case of a high number of inputs and with few simulations available. Then, we compared two approaches to compute the sensitivity indices with the meta-model and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process meta-model. Finally, we extended our statistical tools to the functional output of a computer code. We combined a decomposition on a wavelet basis with Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code simulating radionuclide transport in groundwater. (author) [fr
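
    The surrogate-based workflow described above can be caricatured in a few lines: fit a Gaussian process to a handful of runs of an expensive "computer code" (here a cheap hypothetical stand-in), then estimate first-order Sobol sensitivity indices on the surrogate instead of the code. This is a bare-bones sketch (fixed kernel, no hyperparameter estimation or input selection, unlike the thesis):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rbf(A, B, ls=0.3):
        """Squared-exponential kernel between two sets of row vectors."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)

    def gp_fit(X, y, noise=1e-6):
        """Return the GP posterior-mean predictor (zero prior mean)."""
        K = rbf(X, X) + noise * np.eye(len(X))
        alpha = np.linalg.solve(K, y)
        return lambda Xs: rbf(Xs, X) @ alpha

    # hypothetical "computer code": strong effect of x1, weak effect of x2
    code = lambda X: np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]

    X_train = rng.uniform(size=(80, 2))          # 80 affordable code runs
    surrogate = gp_fit(X_train, code(X_train))

    def first_order_index(model, j, n_outer=200, n_inner=200, dim=2):
        """Crude Monte Carlo estimate of the first-order Sobol index S_j,
        evaluated on the cheap surrogate instead of the expensive code."""
        cond_means = []
        for xj in rng.uniform(size=n_outer):
            Z = rng.uniform(size=(n_inner, dim))
            Z[:, j] = xj                          # freeze input j
            cond_means.append(model(Z).mean())
        total_var = model(rng.uniform(size=(4000, dim))).var()
        return np.var(cond_means) / total_var

    S1 = first_order_index(surrogate, 0)
    S2 = first_order_index(surrogate, 1)          # S1 should dominate S2
    ```

    Every sensitivity evaluation here costs only a kernel product, which is the whole point of the meta-model: thousands of surrogate calls replace thousands of unaffordable code runs.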

  20. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    Science.gov (United States)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners active in the real-time embedded systems domain. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  1. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows the direction of an X-ray beam to be deviated, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called the Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of the block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is at most 2.5 dB of PSNR lower than that of the phase coded aperture reconstructions.
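
    The PSNR used above as the reconstruction-quality metric is simple to compute; a minimal sketch for signals with a known peak value (the generic definition, not the paper's exact implementation):

    ```python
    import numpy as np

    def psnr(reference, reconstruction, peak=1.0):
        """Peak signal-to-noise ratio in dB between two equally sized arrays."""
        ref = np.asarray(reference, float)
        rec = np.asarray(reconstruction, float)
        mse = np.mean((ref - rec) ** 2)
        return float('inf') if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

    # a uniform 0.1 error against a unit-peak signal gives MSE 0.01, i.e. 20 dB
    ref = np.ones((8, 8))
    rec = np.full((8, 8), 0.9)
    quality = psnr(ref, rec)
    ```

    Because PSNR is logarithmic, the reported 2.5 dB gap corresponds to roughly a 1.8x difference in mean-squared reconstruction error.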

  2. Low-level waste shallow burial assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Little, C.A.; Emerson, C.J.

    1981-01-01

    PRESTO (Prediction of Radiation Exposures from Shallow Trench Operations) is a computer code developed under United States Environmental Protection Agency funding to evaluate possible health effects from radionuclide releases from shallow, radioactive-waste disposal trenches and from areas contaminated with operational spillage. The model is intended to predict radionuclide transport and the ensuing exposure and health impact to a stable, local population for a 1000-year period following closure of the burial grounds. Several classes of submodels are used in PRESTO to represent scheduled events, unit system responses, and risk evaluation processes. The code is modular to permit future expansion and refinement. Near-surface transport mechanisms considered in the PRESTO code are cap failure, cap erosion, farming or reclamation practices, human intrusion, chemical exchange within an active surface soil layer, contamination from trench overflow, and dilution by surface streams. Subsurface processes include infiltration and drainage into the trench, the ensuing solubilization of radionuclides, and chemical exchange between trench water and buried solids. Mechanisms leading to contaminated outflow include trench overflow and downward vertical percolation. If the latter outflow reaches an aquifer, radiological exposure from irrigation or domestic consumption is considered. Airborne exposure terms are evaluated using the Gaussian plume atmospheric transport formulation as implemented by Fields and Miller

  3. Global Intersection of Long Non-Coding RNAs with Processed and Unprocessed Pseudogenes in the Human Genome

    Directory of Open Access Journals (Sweden)

    Michael John Milligan

    2016-03-01

    Full Text Available Pseudogenes are abundant in the human genome and had long been thought of purely as nonfunctional gene fossils. Recent observations point to a role for pseudogenes in regulating genes transcriptionally and post-transcriptionally in human cells. To computationally interrogate the network space of integrated pseudogene and long non-coding RNA regulation in the human transcriptome, we developed and implemented an algorithm to identify all long non-coding RNA (lncRNA) transcripts that overlap the genomic spans, and specifically the exons, of any human pseudogenes in either sense or antisense orientation. As inputs to our algorithm, we imported three public repositories of pseudogenes: GENCODE v17 (processed and unprocessed, Ensembl 72), Retroposed Pseudogenes V5 (processed only), and Yale Pseudo60 (processed and unprocessed, Ensembl 60); two public lncRNA catalogs: Broad Institute and GENCODE v17; NCBI annotated piRNAs; and NHGRI clinical variants. The data sets were retrieved from the UCSC Genome Database using the UCSC Table Browser. We identified 2277 loci containing exon-to-exon overlaps between pseudogenes, both processed and unprocessed, and long non-coding RNA genes. Of these loci, we identified 1167 with GenBank EST and full-length cDNA support, providing direct evidence of transcription on one or both strands with exon-to-exon overlaps. The analysis converged on 313 pseudogene-lncRNA exon-to-exon overlaps that were bidirectionally supported by both full-length cDNAs and ESTs. In the process of identifying transcribed pseudogenes, we generated a comprehensive, positionally non-redundant encyclopedia of human pseudogenes, drawing upon multiple, formerly disparate public pseudogene repositories. Collectively, these observations suggest that pseudogenes are pervasively transcribed on both strands and are common drivers of gene regulation.
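
    At its core, the overlap-detection step described above is interval intersection between two exon sets. A deliberately naive O(n*m) sketch with hypothetical coordinates (genome-scale pipelines use sorted sweeps or interval trees, as in tools like BEDTools):

    ```python
    def exon_overlaps(exons_a, exons_b):
        """Return (i, j) index pairs of overlapping half-open [start, end) intervals."""
        pairs = []
        for i, (s1, e1) in enumerate(exons_a):
            for j, (s2, e2) in enumerate(exons_b):
                if s1 < e2 and s2 < e1:   # the standard interval-overlap test
                    pairs.append((i, j))
        return pairs

    # hypothetical pseudogene exons vs. lncRNA exons on the same chromosome axis
    pseudo_exons = [(100, 200), (300, 400)]
    lnc_exons = [(150, 250), (390, 395), (500, 600)]
    hits = exon_overlaps(pseudo_exons, lnc_exons)   # [(0, 0), (1, 1)]
    ```

    A full pipeline would additionally track strand to separate sense from antisense overlaps, which the study reports separately.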

  4. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    Science.gov (United States)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx uses a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
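
    The Cloud-in-Cell step mentioned above distributes each particle's mass linearly between its nearest cell centres; Nyx does the 3-D analogue on AMR grids. A periodic 1-D sketch for illustration only:

    ```python
    import numpy as np

    def cic_deposit(positions, masses, n_cells, box=1.0):
        """1-D Cloud-in-Cell deposition onto a periodic grid.

        Each particle's mass is split between the two cell centres that bracket it,
        with weights proportional to the distance to the *other* centre.
        """
        rho = np.zeros(n_cells)
        dx = box / n_cells
        x = np.asarray(positions) / dx - 0.5       # cell centres sit at integer x
        left = np.floor(x).astype(int)
        frac = x - left                             # 0 -> all mass to the left centre
        m = np.asarray(masses, float)
        np.add.at(rho, left % n_cells, m * (1.0 - frac))
        np.add.at(rho, (left + 1) % n_cells, m * frac)
        return rho

    # a particle exactly on a cell centre lands entirely in that cell
    rho = cic_deposit([0.125], [1.0], n_cells=4)    # centres at 0.125, 0.375, ...
    ```

    `np.add.at` is used instead of fancy-indexed `+=` so that multiple particles mapping to the same cell all accumulate; the scheme conserves total mass by construction.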

  5. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    Energy Technology Data Exchange (ETDEWEB)

    Valach, M; Zymak, J; Svoboda, R [Nuclear Research Inst. Rez plc, Rez (Czech Republic)

    1997-08-01

    This paper presents the development status of the computer codes for thermomechanical behaviour modelling of WWER fuel elements under high-burnup conditions at the Nuclear Research Institute Rez. Emphasis is placed on the analysis of the results of the parametric calculations performed by the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented into both codes. Examples of the performed calculations document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs.

  6. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    International Nuclear Information System (INIS)

    Valach, M.; Zymak, J.; Svoboda, R.

    1997-01-01

    This paper presents the development status of the computer codes for thermomechanical behaviour modelling of WWER fuel elements under high-burnup conditions at the Nuclear Research Institute Rez. Emphasis is placed on the analysis of the results of the parametric calculations performed by the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented into both codes. Examples of the performed calculations document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs

  7. Barriers to data quality resulting from the process of coding health information to administrative data: a qualitative study.

    Science.gov (United States)

    Lucyk, Kelsey; Tang, Karen; Quan, Hude

    2017-11-22

    Administrative health data are increasingly used for research and surveillance to inform decision-making because of their large sample sizes, geographic coverage, comprehensiveness, and possibility for longitudinal follow-up. Within Canadian provinces, individuals are assigned unique personal health numbers that allow for linkage of administrative health records in that jurisdiction. It is therefore necessary to ensure that these data are of high quality, and that chart information is accurately coded to meet this end. Our objective is to explore the potential barriers that exist to high-quality data coding through qualitative inquiry into the roles and responsibilities of medical chart coders. We conducted semi-structured interviews with 28 medical chart coders from Alberta, Canada. We used thematic analysis and open-coded each transcript to understand the process of administrative health data generation and identify barriers to its quality. The process of generating administrative health data is highly complex and involves a diverse workforce. As such, there are multiple points in this process that introduce challenges for high-quality data. For coders, the main barriers to data quality occurred around chart documentation, variability in the interpretation of chart information, and high quota expectations. This study illustrates the complex nature of barriers to high-quality coding in the context of administrative data generation. The findings from this study may be of use to data users, researchers, and decision-makers who wish to better understand the limitations of their data or pursue interventions to improve data quality.

  8. Quality assurance and verification of the MACCS [MELCOR Accident Consequence Code System] code, Version 1.5

    International Nuclear Information System (INIS)

    Dobbe, C.A.; Carlson, E.R.; Marshall, N.H.; Marwil, E.S.; Tolli, J.E.

    1990-02-01

    An independent quality assurance (QA) and verification of Version 1.5 of the MELCOR Accident Consequence Code System (MACCS) was performed. The QA and verification involved examination of the code and associated documentation for consistent and correct implementation of the models in an error-free FORTRAN computer code. The QA and verification was not intended to determine either the adequacy or appropriateness of the models that are used in MACCS 1.5. The reviews uncovered errors, which were fixed by the SNL MACCS code development staff prior to the release of MACCS 1.5. Some difficulties related to documentation improvement and code restructuring are also presented. The QA and verification process concluded that Version 1.5 of the MACCS code, within the scope and limitations of the models implemented in the code, is essentially error free and ready for widespread use. 15 refs., 11 tabs

  9. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach that simultaneously generates, from a high-level specification, both the code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
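    For readers unfamiliar with such annotations, here is a minimal illustration (not AUTOBAYES output) of the kind of loop invariant a certifier must discharge; the run-time asserts stand in for the proof obligations a verification condition generator would emit:

    ```python
    # Illustrative only: a loop invariant and postcondition for a running
    # sum. A certification system proves these symbolically; here they are
    # merely checked at run time.
    def running_sum(xs):
        total, i = 0, 0
        while i < len(xs):
            assert total == sum(xs[:i])  # loop invariant
            total += xs[i]
            i += 1
        assert total == sum(xs)          # postcondition
        return total

    print(running_sum([3, 1, 4]))  # 8
    ```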

  10. Parallel processing Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1994-01-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
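    The "average efficiency curves" mentioned above come from a standard calculation; as a hedged sketch (plain Python, not MCNP/PVM code, with hypothetical timings), parallel efficiency is speedup divided by processor count:

    ```python
    # Toy illustration of how speedup and efficiency curves are computed
    # from timing data; communication overhead is whatever keeps the
    # efficiency below 1.0.
    def speedup(t_serial, t_parallel):
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, n_procs):
        return speedup(t_serial, t_parallel) / n_procs

    # Hypothetical timings: a 100 s serial job runs in 16 s on 8 processors.
    print(efficiency(100.0, 16.0, 8))  # 0.78125
    ```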

  11. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  12. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors

    Science.gov (United States)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed the 13 functions that accounted for the highest percentage of processing were all FFT processing functions. They accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. 
These results support the next higher level of parallelism in PCID, where groups of several hundred frames each producing one resolved image are sent to cliques of several

  13. Proposal of flexible atomic and molecular process management for Monte Carlo impurity transport code based on object oriented method

    International Nuclear Information System (INIS)

    Asano, K.; Ohno, N.; Takamura, S.

    2001-01-01

    Monte Carlo simulation codes for impurity transport have been developed by several groups, to be utilized mainly for fusion-related edge plasmas. The state of an impurity particle is determined by atomic and molecular processes in the plasma, such as ionization and charge exchange. A large number of atomic and molecular processes have to be considered because the edge plasma contains not only impurity atoms but also impurity molecules, mainly related to chemical erosion of carbon materials, and their cross sections have been given experimentally and theoretically. We need to reveal which process is essential in a given edge plasma condition. A Monte Carlo simulation code that takes such various atomic and molecular processes into account is necessary to investigate the behavior of impurity particles in plasmas. Usually, an impurity transport simulation code is written for some specific atomic and molecular processes, so that the introduction of a new process forces complicated programming work. In order to evaluate various proposed atomic and molecular processes, a flexible management of atomic and molecular reactions should be established. We have developed an impurity transport simulation code based on the object-oriented method. By employing object-oriented programming, we can handle each particle as an 'object', which encapsulates both its data and its procedures. A user (not necessarily a programmer) can define the properties of each particle species and the related atomic and molecular processes, and each 'object' is then defined by analyzing this information. According to the relations among plasma particle species, objects are connected with each other and change their state by themselves. Dynamic allocation of these objects to program memory is employed to adapt to an arbitrary number of species and atomic/molecular reactions. Thus we can treat arbitrary species and processes starting from, for instance, methane and acetylene. Such a software procedure would also be useful for industrial plasma applications
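    The object-oriented design described above can be sketched roughly as follows (class and attribute names are invented for illustration and are not from the actual code): each species object owns its reactions, and new processes can be registered without touching the transport loop.

    ```python
    # Hedged sketch of the OO idea: species objects encapsulate their own
    # atomic/molecular processes; reactions are registered dynamically.
    import random

    class Process:
        """One atomic/molecular reaction, e.g. ionization or charge exchange."""
        def __init__(self, name, rate, product):
            self.name = name
            self.rate = rate        # reaction rate [1/s], assumed constant here
            self.product = product  # species name produced by the reaction

    class Species:
        def __init__(self, name):
            self.name = name
            self.processes = []

        def add_process(self, process):
            # Dynamic registration: arbitrary reactions per species.
            self.processes.append(process)

        def step(self, dt, rng):
            """Return the species name after one Monte Carlo time step."""
            for p in self.processes:
                if rng.random() < p.rate * dt:
                    return p.product
            return self.name

    # A user (not a programmer) could declare, e.g., a carbon species:
    c = Species("C")
    c.add_process(Process("ionization", rate=1e4, product="C+"))
    rng = random.Random(0)
    # With seed 0 the first draw exceeds rate*dt = 0.1, so "C" survives.
    print(c.step(1e-5, rng))  # C
    ```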

  14. User's manual for a measurement simulation code

    International Nuclear Information System (INIS)

    Kern, E.A.

    1982-07-01

    The MEASIM code has been developed primarily for modeling process measurements in materials processing facilities associated with the nuclear fuel cycle. In addition, the code computes materials balances and the summation of materials balances, along with the associated variances. The code has been used primarily in performance assessment of materials accounting systems. This report provides the information necessary for a potential user to employ the code in these applications. A number of examples that demonstrate most of the capabilities of the code are provided

  15. Analytical functions used for describing the plastic deformation process in zirconium-alloy WWER-type fuel rod cladding under design-basis accident conditions

    International Nuclear Information System (INIS)

    Fedotov, A.

    2003-01-01

    The aim of this work was to improve the RAPTA-5 code as applied to the analysis of the thermomechanical behavior of fuel rod cladding under design-basis accident conditions. Methods of irreversible process thermodynamics were proposed for describing the plastic deformation process in zirconium alloys under accident conditions. Functions which describe the dependence of yield stress on plastic strain, strain rate and temperature may be successfully used in calculations. On the basis of the experiments performed and the existing experimental data, the dependence of yield stress on plastic strain, strain rate, temperature and heating rate was determined for the E110 alloy. In the future, the following research is planned: research on dynamic strain ageing in E635 alloy under different strain rates; research on the influence of strain rate on plastic strain in E635 alloy at test temperatures above 873 K; research on the deformation strengthening of E635 alloy at high temperatures; and research on the influence of heating rate on phase transformation in E110 and E635 alloys

  16. Storage Tanks - Selection Of Type, Design Code And Tank Sizing

    International Nuclear Information System (INIS)

    Shatla, M.N; El Hady, M.

    2004-01-01

    The present work gives an insight into the proper selection of type, design code and sizing of storage tanks used in the petroleum and process industries. In this work, storage tanks are classified based on their design conditions. Suitable design codes and their limitations are discussed for each tank type. The option of storage under high pressure and ambient temperature, in spherical and cigar tanks, is compared to the option of storage under low temperature and slight pressure (close to ambient) in low-temperature and cryogenic tanks. The discussion is extended to the types of low-temperature and cryogenic tanks, and recommendations are given for selecting their types. A study of pressurized tanks designed according to the ASME code, conducted in the present work, reveals that tanks designed according to ASME Section VIII DIV 2 provide cost savings over tanks designed according to ASME Section VIII DIV 1. The present work is extended to discuss the parameters that affect the sizing of flat-bottom cylindrical tanks. The analysis shows the effect of the height-to-diameter ratio on tank instability and foundation loads
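    As an illustration of the sizing parameters discussed above, here is a hypothetical helper (not from the paper) that fixes a flat-bottom cylindrical tank's dimensions from a required volume and a chosen height-to-diameter ratio:

    ```python
    import math

    # For a flat-bottom cylinder, V = (pi/4) * D^2 * H. Choosing the
    # stability parameter r = H/D gives V = (pi/4) * r * D^3, which can be
    # solved for the diameter directly.
    def tank_dimensions(volume, ratio):
        d = (4.0 * volume / (math.pi * ratio)) ** (1.0 / 3.0)
        return d, ratio * d  # diameter, height

    d, h = tank_dimensions(1000.0, 0.8)  # 1000 m^3 at H/D = 0.8
    print(round(d, 2), round(h, 2))
    ```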

  17. Probe code: a set of programs for processing and analysis of the left ventricular function - User's manual

    International Nuclear Information System (INIS)

    Piva, R.M.V.

    1987-01-01

    The User's Manual of the Probe Code is an addendum to the M.Sc. thesis entitled "A Microcomputer System of Nuclear Probe to Check the Left Ventricular Function". The Probe Code is a software package developed for the processing and off-line analysis of left ventricular function curves obtained in vivo. These curves are produced by means of an external scintigraphic probe, collimated and placed over the left ventricle, after an intravenous injection of Tc-99m. (author)

  18. Channel coding techniques for wireless communications

    CERN Document Server

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  19. Film grain noise modeling in advanced video coding

    Science.gov (United States)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
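    A greatly simplified 1-D sketch of this encoder/decoder split follows (Gaussian grain and a moving-average denoiser are assumptions for illustration; the paper's parametric model is richer, covering power spectral density and cross-channel correlation):

    ```python
    # Encoder side: denoise, keep the clean signal, summarize the removed
    # grain by a parameter (here just its standard deviation).
    # Decoder side: re-synthesize matching noise and add it back.
    import random, statistics

    def moving_average(x, k=3):
        half = k // 2
        return [statistics.fmean(x[max(0, i - half):i + half + 1])
                for i in range(len(x))]

    rng = random.Random(1)
    clean = [10.0] * 200
    noisy = [c + rng.gauss(0.0, 2.0) for c in clean]   # grainy input

    denoised = moving_average(noisy)                    # coded without grain
    grain = [n - d for n, d in zip(noisy, denoised)]    # extracted grain
    sigma = statistics.stdev(grain)                     # transmitted "model"

    resynth = [d + rng.gauss(0.0, sigma) for d in denoised]  # decoder output
    print(round(sigma, 1))
    ```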

  20. EPIC: an Error Propagation/Inquiry Code

    International Nuclear Information System (INIS)

    Baker, A.L.

    1985-01-01

    The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs
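    The summation step described above can be sketched in a few lines (a simplified illustration, not EPIC itself; the point-variance values are hypothetical and assume independent measurement errors):

    ```python
    # After a variance is computed at each measurement point, the total
    # variance for the materials balance area (MBA) is the algebraic sum.
    def mba_variance(point_variances):
        return sum(point_variances)

    # Hypothetical measurement-point variances (g^2 of material):
    points = [4.0, 1.0, 9.0, 2.0]
    print(mba_variance(points))  # 16.0
    ```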

  1. APPC - A new standardised coding system for trans-organisational PACS retrieval

    International Nuclear Information System (INIS)

    Fruehwald, F.; Lindner, A.; Mostbeck, G.; Hruby, W.; Fruehwald-Pallamar, J.

    2010-01-01

    As part of a general strategy to integrate the health care enterprise, Austria plans to connect the Picture Archiving and Communication Systems (PACS) of all radiological institutions into a nationwide network. To facilitate the search for relevant correlative imaging data in the PACS of different organisations, a coding system was compiled for all radiological procedures and necessary anatomical details. This code, called the Austrian PACS Procedure Code (APPC), was granted the status of a standard under HL7. Examples are provided of effective coding and filtering when searching for relevant imaging material using the APPC, as well as the planned process for future adjustments of the APPC. The implementation and how the APPC will fit into the future electronic environment, which will include an electronic health act for all citizens in Austria, are discussed. A comparison to other nationwide electronic health record projects and coding systems is given. Limitations and possible use in physical storage media are contemplated. (orig.)

  2. Development of RETRAN-03/MOV code for thermal-hydraulic analysis of nuclear reactor under moving conditions

    International Nuclear Information System (INIS)

    Kim, Hak Jae; Park, Goon Cherl

    1996-01-01

    Nuclear ship reactors have several features different from land-based PWRs. In particular, the effects of ship motions on reactor thermal-hydraulics and good load-following capability for abrupt load changes are essential characteristics of nuclear ship reactors. This study modified RETRAN-03 to analyze thermal-hydraulic transients under three-dimensional ship motions; the resulting code, named RETRAN-03/MOV, is intended for application to future marine reactors. To verify the code, the reactor of the first Japanese nuclear ship, MUTSU, was analyzed under various ship motions. Calculations were performed under rolling, heaving and stationary inclination conditions during normal operation. Natural circulation, which removes decay heat to ensure the passive safety of marine reactors, was also analyzed. As a result, typical thermal-hydraulic characteristics of marine reactors, such as flow rate oscillations and S/G water level oscillations, were successfully simulated at various conditions. 7 refs., 11 figs. (author)

  3. Nationwide Risk-Based PCB Remediation Waste Disposal Approvals under Title 40 of the Code of Federal Regulations (CFR) Section 761.61(c)

    Science.gov (United States)

    This page contains information about Nationwide Risk-Based Polychlorinated Biphenyls (PCBs) Remediation Waste Disposal Approvals under Title 40 of the Code of Federal Regulations (CFR) Section 761.61(c)

  4. The levels of processing effect under nitrogen narcosis.

    Science.gov (United States)

    Kneller, Wendy; Hobbs, Malcolm

    2013-01-01

    Previous research has consistently demonstrated that inert gas (nitrogen) narcosis affects free recall but not recognition memory in the depth range of 30 to 50 meters of sea water (msw), possibly as a result of narcosis preventing processing when learned material is encoded. The aim of the current research was to test this hypothesis by applying a levels-of-processing approach to the measurement of free recall under narcosis. Experiment 1 investigated the effect of depth (0-2 msw vs. 37-39 msw) and level of processing (shallow vs. deep) on free recall memory performance in 67 divers. When age was included as a covariate, recall was significantly worse in deep water (i.e., under narcosis) compared to shallow water, and was significantly higher in the deep-processing compared to the shallow-processing condition at both depths. Experiment 2 demonstrated that this effect was not simply due to the different underwater environments used for the depth conditions in Experiment 1. It was concluded that memory performance can be altered by processing under narcosis, which supports the contention that narcosis affects the encoding stage of memory as opposed to self-guided search (retrieval).

  5. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    Science.gov (United States)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  6. International Code Assessment and Applications Program: Annual report

    International Nuclear Information System (INIS)

    Ting, P.; Hanson, R.; Jenks, R.

    1987-03-01

    This is the first annual report of the International Code Assessment and Applications Program (ICAP). The ICAP was organized by the Office of Nuclear Regulatory Research, United States Nuclear Regulatory Commission (USNRC) in 1985. The ICAP is an international cooperative reactor safety research program planned to continue over a period of approximately five years. To date, eleven European and Asian countries/organizations have joined the program through bilateral agreements with the USNRC. Seven proposed agreements are currently under negotiation. The primary mission of the ICAP is to provide independent assessment of the three major advanced computer codes (RELAP5, TRAC-PWR, and TRAC-BWR) developed by the USNRC. However, program activities can be expected to enhance the assessment process throughout member countries. The codes were developed to calculate the reactor plant response to transients and loss-of-coolant accidents. Accurate prediction of normal and abnormal plant response using the codes enhances procedures and regulations used for the safe operation of the plant and also provides technical basis for assessing the safety margin of future reactor plant designs. The ICAP is providing required assessment data that will contribute to quantification of the code uncertainty for each code. The first annual report is devoted to coverage of program activities and accomplishments during the period between April 1985 and March 1987

  7. Advanced thermohydraulic simulation code for transients in LMFBRs (SSC-L code)

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, A.K.

    1978-02-01

    Physical models for various processes that are encountered in preaccident and transient simulation of thermohydraulic transients in the entire liquid metal fast breeder reactor (LMFBR) plant are described in this report. A computer code, SSC-L, was written as a part of the Super System Code (SSC) development project for the ''loop''-type designs of LMFBRs. This code has the self-starting capability, i.e., preaccident or steady-state calculations are performed internally. These results then serve as the starting point for the transient simulation.

  8. Advanced thermohydraulic simulation code for transients in LMFBRs (SSC-L code)

    International Nuclear Information System (INIS)

    Agrawal, A.K.

    1978-02-01

    Physical models for various processes that are encountered in preaccident and transient simulation of thermohydraulic transients in the entire liquid metal fast breeder reactor (LMFBR) plant are described in this report. A computer code, SSC-L, was written as a part of the Super System Code (SSC) development project for the ''loop''-type designs of LMFBRs. This code has the self-starting capability, i.e., preaccident or steady-state calculations are performed internally. These results then serve as the starting point for the transient simulation

  9. Dual Coding, Reasoning and Fallacies.

    Science.gov (United States)

    Hample, Dale

    1982-01-01

    Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)

  10. Monomial codes seen as invariant subspaces

    Directory of Open Access Journals (Sweden)

    García-Planas María Isabel

    2017-08-01

    It is well known that cyclic codes are very useful because of their applications, since they are not computationally expensive and encoding can be easily implemented. The relationship between cyclic codes and invariant subspaces is also well known. In this paper a generalization of this relationship is presented, between monomial codes over a finite field F and hyperinvariant subspaces of F^n under an appropriate linear transformation. Using techniques of linear algebra it is possible to deduce certain properties of this particular type of code, generalizing known results on cyclic codes.
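    The cyclic-code special case of this invariance can be checked directly. The following sketch (not from the paper) verifies that the binary (7,4) cyclic code with generator g(x) = 1 + x + x^3 is an invariant subspace of the cyclic-shift map:

    ```python
    # Build all 16 codewords m(x)g(x) mod (x^7 - 1) over GF(2) and check
    # that the code is closed under the cyclic-shift linear transformation
    # (which corresponds to multiplication by x).
    from itertools import product

    def poly_mul_mod(a, b, n):
        """Multiply two GF(2) polynomials (coefficient lists) mod x^n - 1."""
        out = [0] * n
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if ai and bj:
                    out[(i + j) % n] ^= 1
        return tuple(out)

    n, g = 7, [1, 1, 0, 1]  # g(x) = 1 + x + x^3, a divisor of x^7 - 1
    code = {poly_mul_mod(list(m), g, n) for m in product([0, 1], repeat=4)}

    def shift(c):
        return c[-1:] + c[:-1]  # cyclic shift = multiplication by x

    assert len(code) == 16
    assert all(shift(c) in code for c in code)  # invariance under shift
    print("cyclic code is shift-invariant")
    ```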

  11. V.S.O.P. (99/05) computer code system

    International Nuclear Information System (INIS)

    Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.

    2005-11-01

    V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate the reactor operation from the initial core towards the equilibrium core. V.S.O.P.(99 / 05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions have been included concerning the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebblebed cores. This latest code version was developed and tested under the WINDOWS-XP - operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required - if desired - for storage of the source code (∼65000 Fortran statements). (orig.)

  12. V.S.O.P. (99/05) computer code system

    Energy Technology Data Exchange (ETDEWEB)

    Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.

    2005-11-01

    V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate the reactor operation from the initial core towards the equilibrium core. V.S.O.P.(99 / 05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions have been included concerning the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebblebed cores. This latest code version was developed and tested under the WINDOWS-XP - operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required - if desired - for storage of the source code (∼65000 Fortran statements). (orig.)

  13. Computer codes for the calculation of vibrations in machines and structures

    International Nuclear Information System (INIS)

    1989-01-01

    After an introductory paper on the typical requirements to be met by vibration calculations, the first two sections of the conference papers present universal as well as specific finite-element codes tailored to solve individual problems. For the calculation of dynamic processes, the method of multi-component systems, which takes into account rigid bodies or partial structures and linking and joining elements, is now increasingly applied in addition to finite elements. This method, too, is explained with reference to universal computer codes and to special versions. In mechanical engineering, rotary vibrations are a major problem, and under this topic, conference papers exclusively deal with codes that also take into account special effects such as electromechanical coupling, non-linearities in clutches, etc. (orig./HP) [de

  14. Thermal-hydraulic analysis code development and application to passive safety reactor at JAERI

    International Nuclear Information System (INIS)

    Araya, F.

    1995-01-01

    After a brief overview of the safety assessment process, the author describes the LOCA analysis code system developed at JAERI. It comprises audit calculation codes (WREM, WREM-J2, Japan's own code) and best-estimate (BE) codes (2D/3D, ICAP, ROSA). The codes are applied to the development of the Japanese passive safety reactor concept JPSR. Special attention is paid to the passive heat removal system and to the phenomena considered to occur during a loss-of-heat-sink event. Examples of LOCA analyses based on JPSR operation are given for the cases of heat removal by the upper RHR and heat removal from the core to the atmosphere. Experiments on the multi-dimensional flow field in the RPV and on steam condensation in a water pool are used for understanding the phenomena in passive safety reactors. The report is in poster form only. 1 tab., 13 figs

  15. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    Science.gov (United States)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption scheme based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference, and the ratio between the intensities of the two decryption light beams.
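    Stripped of the optics, the subtraction step can be illustrated with a toy numeric analogue (the masks and data here are invented; the real scheme works with phase-only masks and recorded optical intensities, not plain real vectors):

    ```python
    # Hide data in two real-valued masks; either mask alone looks random,
    # but their subtraction recovers the hidden values exactly.
    import random

    rng = random.Random(42)
    data = [0, 1, 1, 0, 1]          # stand-in for QR-code pixels

    mask1 = [rng.random() for _ in data]          # ciphertext 1 (random)
    mask2 = [m - d for m, d in zip(mask1, data)]  # ciphertext 2

    recovered = [round(a - b) for a, b in zip(mask1, mask2)]
    print(recovered)  # [0, 1, 1, 0, 1]
    ```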

  16. Analyses and computer code developments for accident-induced thermohydraulic transients in water-cooled nuclear reactor systems

    International Nuclear Information System (INIS)

    Wulff, W.

    1977-01-01

    A review is presented on the development of analyses and computer codes for the prediction of thermohydraulic transients in nuclear reactor systems. Models for the dynamics of two-phase mixtures are summarized. Principles of process, reactor component and reactor system modeling are presented, as well as the verification of these models by comparing predicted results with experimental data. Codes of major importance are described, which have recently been developed or are presently under development. The characteristics of these codes are presented in terms of governing equations, solution techniques and code structure. Current efforts and problems of code verification are discussed. A summary is presented of advances which are necessary for reducing the conservatism currently implied in reactor hydraulics codes for safety assessment

  17. TRIO-EF a general thermal hydraulics computer code applied to the Avlis process

    International Nuclear Information System (INIS)

    Magnaud, J.P.; Claveau, M.; Coulon, N.; Yala, P.; Guilbaud, D.; Mejane, A.

    1993-01-01

    TRIO-EF is a general-purpose 3D finite element fluid mechanics code. Its capabilities cover steady-state or transient, laminar or turbulent, isothermal or temperature-dependent fluid flows; it is applicable to the study of coupled thermo-fluid problems involving heat conduction and possibly radiative heat transfer. It has been used to study the thermal behaviour of the AVLIS process separation module. In this process, a linear electron beam impinges on the free surface of a uranium ingot, generating a two-dimensional curtain emission of vapour from a water-cooled crucible. The energy transferred to the metal causes its partial melting, forming a pool where strong convective motion increases heat transfer towards the crucible. In the upper part of the separation module, the internal structures are devoted to two main functions: vapour containment and reflux, and irradiation and physical separation. They are subjected to very high temperature levels, and heat transfer occurs mainly by radiation. Moreover, special attention has to be paid to electron backscattering. These two major points have been simulated numerically with TRIO-EF, and the paper presents and comments on the results of such a computation for each of them. After a brief overview of the computer code, two examples of the TRIO-EF capabilities are given: a crucible thermal-hydraulics model and a thermal analysis of the internal structures.

  18. Validation of the Thermal-Hydraulic Model in the SACAP Code with the ISP Tests

    Energy Technology Data Exchange (ETDEWEB)

    Park, Soon-Ho; Kim, Dong-Min; Park, Chang-Hwan [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    From a safety viewpoint, the containment pressure is an important parameter; the local hydrogen concentration is also of major concern because of its flammability and the risk of detonation. In Korea, there has been an extensive effort to develop a computer code that can analyze the severe accident behavior of the pressurized water reactor. The development has been done in a modularized manner, and the SACAP (Severe Accident Containment Analysis Package) code is now in the final stage of development. SACAP adopts an LP (Lumped Parameter) model and is applicable to analyzing the synthetic behavior of the containment during a severe accident brought about by thermal-hydraulic transients, combustible gas burn, direct containment heating by high-pressure melt ejection, steam explosion and molten core-concrete interaction. Analyses of a number of ISP (International Standard Problem) experiments were performed as part of the SACAP code V&V (verification and validation). In this paper, we selected and analyzed ISP-35 NUPEC and ISP-47 TOSQAN in order to confirm the computational performance of the SACAP code currently under development; the results are presented, including comparisons with other existing NPP simulation codes. The multi-node analysis for ISP-47 is under way. As a result of the simulation, SACAP predicts well the thermal-hydraulic variables such as temperature and pressure. We also verify that the SACAP code is properly equipped to analyze gas distribution and condensation.
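The essence of a lumped-parameter containment model can be conveyed with a deliberately crude single-node sketch: one well-mixed volume receives mass and energy from a blowdown source, and pressure is re-evaluated from the ideal-gas law. All property values and source numbers below are illustrative assumptions, not SACAP data; real LP codes track multi-component gas mixtures, condensation and heat sinks.

```python
R = 461.5          # J/(kg K), specific gas constant of steam
CV = 1410.0        # J/(kg K), approximate constant-volume heat capacity

V = 50_000.0             # m^3, containment free volume (illustrative)
m, T = 60_000.0, 330.0   # initial steam-equivalent inventory (kg) and T (K)

def add_source(m, T, dm, h_in):
    """Mix dm kg of source steam with specific enthalpy h_in (J/kg)
    into the node and return the new (mass, temperature)."""
    u_mix = (m * CV * T + dm * h_in) / (m + dm)  # energy balance
    return m + dm, u_mix / CV

p0 = m * R * T / V                    # ideal-gas pressure before blowdown
m, T = add_source(m, T, dm=5_000.0, h_in=2.8e6)
p1 = m * R * T / V                    # pressure after the mass/energy addition
print(p1 > p0)  # True
```

Each time step of an LP code repeats this mass/energy bookkeeping per node, with junction flows coupling the nodes.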

  19. SCDAP/RELAP5/MOD3 code development

    International Nuclear Information System (INIS)

    Allison, C.M.; Siefken, J.L.; Coryell, E.W.

    1992-01-01

    The SCDAP/RELAP5/MOD3 computer code is designed to describe the overall reactor coolant system (RCS) thermal-hydraulic response, core damage progression, and fission product release and transport during severe accidents. The code is being developed at the Idaho National Engineering Laboratory (INEL) under the primary sponsorship of the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission (NRC). Code development activities are currently focused on three main areas: (a) code usability, (b) early-phase melt progression model improvements, and (c) advanced reactor thermal-hydraulic model extensions. This paper describes the first two activities. A companion paper describes the advanced reactor model improvements being performed under RELAP5/MOD3 funding.

  20. Nationwide Enviro Jet PCB Decontamination Approval and Notifications under Title 40 of the Code of Federal Regulations (CFR) Section 761.79(h)

    Science.gov (United States)

    This page contains information about approvals and notifications for Enviro Jet to Decontaminate PCB-contaminated natural gas pipelines under Title 40 of the Code of Federal Regulations (CFR) Section 761.79(h)

  1. Scaling of Thermal-Hydraulic Phenomena and System Code Assessment

    International Nuclear Information System (INIS)

    Wolfert, K.

    2008-01-01

    In the last five decades, large efforts have been undertaken to provide reliable thermal-hydraulic system codes for the analysis of transients and accidents in nuclear power plants. Many separate effects tests and integral system tests were carried out to establish a database for code development and code validation. In this context, the question has to be answered to what extent the results of down-scaled test facilities represent the thermal-hydraulic behaviour expected in a full-scale nuclear reactor under accident conditions. Scaling principles, developed by many scientists and engineers, provide a scientific-technical basis and give valuable orientation for the design of test facilities. However, it is impossible for a down-scaled facility to reproduce all physical phenomena in the correct temporal sequence and in the kind and strength of their occurrence. The designer needs to optimize a down-scaled facility for the processes of primary interest, which unavoidably leads to scaling distortions of other, less important processes. Taking these weak points into account, a goal-oriented code validation strategy is required, based on the analysis of separate effects tests and integral system tests as well as of transients that have occurred in full-scale nuclear reactors. The CSNI validation matrices are an excellent basis for fulfilling this task. Separate effects tests at full scale play an important role here.

  2. How to review 4 million lines of ATLAS code

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; The ATLAS collaboration; Lampl, Walter

    2017-01-01

    As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties in even logistically organising such reviews in an already busy community, how to examine areas in sufficient depth to learn key areas in need of upgrade, yet...

  3. How To Review 4 Million Lines of ATLAS Code

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration

    2016-01-01

    As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties in even logistically organising such reviews in an already busy community, how to examine areas in sufficient depth to learn key areas in need of upgrade, yet...

  4. Tri-code inductance control rod position indicator with several multi-coding-bars

    International Nuclear Information System (INIS)

    Shi Jibin; Jiang Yueyuan; Wang Wenran

    2004-01-01

    A control rod position indicator, named the tri-code inductance control rod position indicator with multi-coding-bars, has been developed; it possesses a simple structure, reliable operation and high precision. The detector of the indicator is composed of K coils, a compensatory coil and K coding bars. Each coding bar consists of several sections of strong magnetic cores, several sections of weak magnetic cores and several non-magnetic portions. As the control rod is withdrawn, the coding bars move within the centers of the coils while a constant alternating current passes through the coils, causing them to produce inductive alternating-voltage signals. The coil outputs are sampled and processed, and the tri-codes indicating rod position are obtained. The coding principle of the detector and its related structure are also introduced. The analysis shows that the indicator offers advantages over the coil-coding rod position indicator, so it can meet the demands of rod position indication in the nuclear heating reactor (NHR). (authors)
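The principle of reading a position from K three-level coil signals can be illustrated with a short sketch. The level names and the most-significant-first digit ordering below are hypothetical, since the record does not specify the actual encoding; the point is simply that K ternary ("tri-code") digits index 3^K distinct rod positions.

```python
# Hedged sketch: each coil reports one of three levels depending on which
# section of the coding bar sits inside it (assumed mapping shown here).
LEVELS = {"non-magnetic": 0, "weak": 1, "strong": 2}

def tricode_to_position(digits):
    """Interpret most-significant-first ternary digits as a position index."""
    pos = 0
    for d in digits:
        if d not in (0, 1, 2):
            raise ValueError("tri-code digits must be 0, 1 or 2")
        pos = pos * 3 + d
    return pos

# Three coils reading (strong, non-magnetic, weak):
print(tricode_to_position([2, 0, 1]))  # 2*9 + 0*3 + 1 = 19
```

With K coils the ternary scheme resolves 3^K positions, versus 2^K for a binary coil-coding indicator of the same size.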

  5. A multidisciplinary audit of clinical coding accuracy in otolaryngology: financial, managerial and clinical governance considerations under payment-by-results.

    Science.gov (United States)

    Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S

    2009-02-01

    To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Healthcare Resource Group (HRG) assignment. 1250 randomly selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of £174.90 per patient (14.7%), and 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue, and, given that the same data are used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These errors can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.

  6. Code of practice for ionizing radiation

    International Nuclear Information System (INIS)

    Khoo Boo Huat

    1995-01-01

    Prior to 1984, the use of ionizing radiation in Malaysia was governed by the Radioactive Substances Act of 1968. After 1984, its use came under the control of Act 304, the Atomic Energy Licensing Act 1984. Under powers vested by the Act, the Radiation Protection (Basic Safety Standards) Regulations 1988 were formulated to regulate its use. These Acts do not provide information on proper working procedures. With the publication of the Codes of Practice by the Standards and Industrial Research Institute of Malaysia (SIRIM), users are now able to follow proper guidelines and use ionizing radiation safely and beneficially. This paper discusses the relevant sections in the following codes: 1. Code of Practice for Radiation Protection (Medical X-ray Diagnosis) MS 838:1983. 2. Code of Practice for Safety in Laboratories Part 4: Ionizing radiation MS 1042: Part 4: 1992. (author)

  7. Development and application of the waste code

    International Nuclear Information System (INIS)

    Morison, I.W.

    1984-01-01

    This paper discusses the objectives and general approach underlying the Australian Code of Practice on the Management of Radioactive Wastes arising from the Mining and Milling of Radioactive Ores 1982. Background to the development of the Code is provided and the guidelines which supplement the Code are considered

  8. Processes underlying treatment success and failure in assertive community treatment.

    Science.gov (United States)

    Stull, Laura G; McGrew, John H; Salyers, Michelle P

    2012-02-01

    Processes underlying success and failure in assertive community treatment (ACT), a widely investigated treatment model for persons with severe mental illness, are poorly understood. The purpose of the current study was to examine processes in ACT by (1) understanding how consumers and staff describe the processes underlying treatment success and failure and (2) comparing processes identified by staff and consumers. Investigators conducted semi-structured interviews with 25 staff and 23 consumers from four ACT teams. Both staff and consumers identified aspects of the ACT team itself as the most critical in the process of consumer success. For failure, consumers identified consumer characteristics as most critical and staff identified lack of social relationships. Processes underlying failure were not viewed as merely the opposite of processes underlying success. In addition, there was notable disagreement between staff and consumers on important processes. Findings overlap with critical ingredients identified in previous studies, including aspects of the ACT team, social involvement and employment. In contrast to prior studies, there was little emphasis on hospitalizations and greater emphasis on not abusing substances, obtaining wants and desires, and consumer characteristics.

  9. Stability of prebiotic, laminaran oligosaccharide under food processing conditions

    Science.gov (United States)

    Chamidah, A.

    2018-04-01

    Prebiotic stability tests on laminaran oligosaccharide under food processing conditions were performed to determine the ability of the prebiotic to withstand processing. Laminaran oligosaccharide is produced by enzymatic hydrolysis. Before this prebiotic can be applied more widely, its performance under food processing must be tested. A prebiotic, alone or in combination with a probiotic, can improve human digestive health, and evaluation of its effectiveness should take both its chemical and its functional stability into account. This study aims to investigate the stability of laminaran oligosaccharide under food processing conditions.

  10. Coding aperture applied to X-ray imaging

    International Nuclear Information System (INIS)

    Brunol, J.; Sauneuf, R.; Gex, J.P.

    1980-05-01

    We present X-ray images of grids and plasmas obtained by using a single circular slit (annular code) as the coding aperture together with a computer decoding process. The experimental resolution is better than 10 μm, and it is expected to reach the order of 2 or 3 μm with the same code and an improved decoding process.
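The encode/decode cycle of such a coded aperture can be sketched numerically: recording is a convolution of the source with the annular code, and decoding is a cross-correlation with the same code (a matched filter), whose autocorrelation peak recovers the source position. Grid size, slit radii and source position below are arbitrary illustrative choices, not the experimental geometry:

```python
import numpy as np

N = 64
yy, xx = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(xx, yy)
annulus = ((r > 12) & (r < 14)).astype(float)  # single circular slit

# Point-like X-ray source at an off-centre position.
obj = np.zeros((N, N))
obj[20, 40] = 1.0

F = np.fft.fft2
code = np.fft.ifftshift(annulus)  # put the code's centre at the origin

# Recording: each source point casts a shadow of the annulus (convolution).
recorded = np.real(np.fft.ifft2(F(obj) * F(code)))

# Decoding: cross-correlate with the code; the matched-filter peak of the
# code's autocorrelation lands at the source position.
decoded = np.real(np.fft.ifft2(F(recorded) * np.conj(F(code))))
peak = tuple(int(i) for i in np.unravel_index(np.argmax(decoded), decoded.shape))
print(peak)  # (20, 40)
```

An extended source is recovered the same way, as a superposition of shifted autocorrelation peaks; the decoding quality then depends on how sharp the code's autocorrelation is.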

  11. Media audit reveals inappropriate promotion of products under the scope of the International Code of Marketing of Breast-milk Substitutes in South-East Asia.

    Science.gov (United States)

    Vinje, Kristine Hansen; Phan, Linh Thi Hong; Nguyen, Tuan Thanh; Henjum, Sigrun; Ribe, Lovise Omoijuanfo; Mathisen, Roger

    2017-06-01

    To review regulations and to perform a media audit of promotion of products under the scope of the International Code of Marketing of Breast-milk Substitutes ('the Code') in South-East Asia. We reviewed national regulations relating to the Code and 800 clips of editorial content, 387 advertisements and 217 Facebook posts from January 2015 to January 2016. We explored the ecological association between regulations and market size, and between the number of advertisements and market size and growth of milk formula. Cambodia, Indonesia, Myanmar, Thailand and Vietnam. Regulations on the child's age for inappropriate marketing of products are all below the Code's updated recommendation of 36 months (i.e. 12 months in Thailand and Indonesia; 24 months in the other three countries) and are voluntary in Thailand. Although the advertisements complied with the national regulations on the age limit, they had content (e.g. stages of milk formula; messages about the benefit; pictures of a child) that confused audiences. Market size and growth of milk formula were positively associated with the number of newborns and the number of advertisements, and were not affected by the current level of implementation of breast-milk substitute laws and regulations. The present media audit reveals inappropriate promotion and insufficient national regulation of products under the scope of the Code in South-East Asia. Strengthened implementation of regulations aligned with the Code's updated recommendation should be part of comprehensive strategies to minimize the harmful effects of advertisements of breast-milk substitutes on maternal and child nutrition and health.

  12. Verification of the network flow and transport/distributed velocity (NWFT/DVM) computer code

    International Nuclear Information System (INIS)

    Duda, L.E.

    1984-05-01

    The Network Flow and Transport/Distributed Velocity Method (NWFT/DVM) computer code was developed primarily to fulfill the need for a computationally efficient ground-water flow and contaminant transport capability for use in risk analyses where, quite frequently, large numbers of calculations are required. It is a semi-analytic, quasi-two-dimensional network code that simulates ground-water flow and the transport of dissolved species (radionuclides) in a saturated porous medium. The development of this code was carried out under a program funded by the US Nuclear Regulatory Commission (NRC) to develop a methodology for assessing the risk from disposal of radioactive wastes in deep geologic formations (FIN: A-1192 and A-1266). In support of the methodology development program, the NRC has funded a separate Maintenance of Computer Programs Project (FIN: A-1166) to ensure that the codes developed under A-1192 or A-1266 remain consistent with current operating systems, are as error-free as possible, and have up-to-date documentation for reference by the NRC staff. Part of this effort includes verification and validation tests to assure that a code correctly performs the operations specified and/or correctly represents the processes or system for which it is intended. This document contains four verification problems for the NWFT/DVM computer code. Two of these are analytical verifications, in which NWFT/DVM results are compared to analytical solutions; the other two are code-to-code verifications, in which NWFT/DVM results are compared to those of another computer code. In all cases NWFT/DVM showed good agreement with both the analytical solutions and the results from the other code.
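Analytical verification of a transport code follows the pattern sketched below: a simple problem with a closed-form answer (here steady 1-D advection with first-order decay, where C(x) = C0·exp(-λx/v)) is solved with a crude numerical scheme and compared against the exact solution. The scheme and parameter values are illustrative, not NWFT/DVM's own test problems:

```python
import math

# Steady 1-D advection with first-order decay, upwind differences:
#   v*(c_i - c_{i-1})/dx = -lam*c_i   =>   c_i = c_{i-1}/(1 + lam*dx/v)
v, lam, C0 = 10.0, 0.05, 1.0   # velocity (m/a), decay constant (1/a), source
L, n = 100.0, 1000             # domain length (m) and number of cells
dx = L / n

c = C0
numeric = [c]
for _ in range(n):
    c = c / (1.0 + lam * dx / v)
    numeric.append(c)

exact = C0 * math.exp(-lam * L / v)        # analytical solution at x = L
rel_err = abs(numeric[-1] - exact) / exact
print(rel_err < 0.01)  # True
```

Agreement within the expected discretization error passes the verification; a persistent discrepancy would point to a coding or modeling error.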

  13. Optimization of Wireless Transceivers under Processing Energy Constraints

    Science.gov (United States)

    Wang, Gaojian; Ascheid, Gerd; Wang, Yanlu; Hanay, Oner; Negra, Renato; Herrmann, Matthias; Wehn, Norbert

    2017-09-01

    The focus of this article is on achieving maximum data rates under a processing energy constraint. For a given amount of processing energy per information bit, the overall power consumption increases with the data rate. When targeting data rates beyond 100 Gb/s, the system's overall power consumption soon exceeds the power that can be dissipated without forced cooling. To achieve the maximum data rate under this power constraint, the processing energy per information bit must be minimized. Therefore, in this article, processing-efficient transmission schemes together with energy-efficient architectures and their implementations are investigated in a true cross-layer approach. Target use cases are short-range wireless transmitters working at carrier frequencies around 60 GHz and bandwidths between 1 GHz and 10 GHz.
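The constraint itself is simple arithmetic: a dissipation-limited power budget P and an energy cost per information bit E_bit cap the data rate at R = P / E_bit. The numbers below are illustrative, not taken from the article:

```python
def max_data_rate(power_budget_w, energy_per_bit_j):
    """Dissipation-limited data rate in bit/s: R = P / E_bit."""
    return power_budget_w / energy_per_bit_j

# Illustrative: a 1 W dissipation budget at 10 pJ/bit caps the link near
# 100 Gb/s; halving the energy per bit doubles the achievable rate.
rate = max_data_rate(1.0, 10e-12)
print(rate / 1e9, "Gb/s")
```

This is why the article optimizes energy per bit across all layers: with the power budget fixed by passive cooling, E_bit is the only lever left for pushing the rate beyond 100 Gb/s.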

  14. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    Science.gov (United States)

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  15. Thermomechanical DART code improvements for LEU VHD dispersion and monolithic fuel element analysis

    International Nuclear Information System (INIS)

    Taboada, H.; Saliba, R.; Moscarda, M.V.; Rest, J.

    2005-01-01

    A collaboration agreement between ANL/US DOE and CNEA Argentina in the area of Low Enriched Uranium Advanced Fuels has been in place since October 16, 1997 under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy. An annex concerning DART code optimization has been operative since February 8, 1999. Previously, as part of this annex, a visual FASTDART version and a DART THERMAL version were presented during the RERTR 2000, 2002 and 2003 meetings. During this past year the following activities were completed: optimization of the DART TM code Al diffusion parameters by testing predictions against reliable data from RERTR experiments, and improvements to the 3-D thermo-mechanical version of the code for modeling the irradiation behavior of LEU U-Mo monolithic fuel. Concerning the first point, by optimizing the parameters in the theoretical expression for Al diffusion through the interaction product, reasonable agreement was reached between DART temperature calculations and reliable RERTR PIE data. The 3-D thermomechanical code complex is based upon a finite element thermal-elastic code named TERMELAS, with irradiation behavior provided by the DART code. An adequate, progressive process for coupling the calculations of the two codes at each time step is currently being developed. Compatible thermal calculations between the two codes have been achieved. This is the first stage in benchmarking and validating the coupling process against RERTR PIE data. (author)

  16. The effects of divided attention on encoding processes under incidental and intentional learning instructions: underlying mechanisms?

    Science.gov (United States)

    Naveh-Benjamin, Moshe; Guez, Jonathan; Hara, Yoko; Brubaker, Matthew S; Lowenschuss-Erlich, Iris

    2014-01-01

    Divided attention (DA) at encoding has been shown to significantly disrupt later memory for the studied information. However, what type of processing is disrupted during DA remains unresolved. In this study, we assessed the degree to which strategic effortful processes are affected under DA by comparing the effects of DA at encoding under intentional and pure incidental learning instructions. In three experiments, participants studied lists of words or word pairs under either full or divided attention. The results of the three experiments, which used different methodologies, converged to show that DA at encoding reduces memory performance to the same degree under incidental and intentional learning. Secondary task performance indicated that encoding under intentional learning instructions was more effortful than under incidental learning instructions. In addition, the results indicated enhanced attention to the initial appearance of the words under both types of learning instructions. The results are interpreted to imply that processes other than strategic effortful ones might also be affected by DA at encoding.

  17. A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding.

    Science.gov (United States)

    Taillefumier, Thibaud; Magnasco, Marcelo O

    2013-04-16

    Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
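A Monte Carlo sketch of the first-passage setup is easy to write down. Here the boundary is a smooth (constant) level, which sits trivially on the H > 1/2 side of the transition; replacing `boundary` with a rough function of t is the natural experiment to try. Step size and horizon are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage_time(dt=1e-3, t_max=5.0, boundary=lambda t: 1.0):
    """First time a standard Brownian path exceeds boundary(t); NaN if never
    within the horizon t_max."""
    n = int(t_max / dt)
    t = np.arange(1, n + 1) * dt
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))  # Brownian increments
    hits = np.nonzero(w >= boundary(t))[0]
    return t[hits[0]] if hits.size else np.nan

# Sample the crossing-time distribution through a fixed level:
samples = [first_passage_time() for _ in range(200)]
times = np.array([s for s in samples if not np.isnan(s)])
print(times.size, times.mean())
```

For a smooth boundary the histogram of `times` approximates a continuous density; the paper's result is that for boundaries rougher than the process itself (H < 1/2) this density degenerates onto a Cantor-like support.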

  18. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model: the governing equations, the formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual describing simulation procedures, input data preparation, output, and example test cases.
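The end-to-end logic (vault release, decay during groundwater travel, dose conversion in the biosphere) can be caricatured in one line of arithmetic. The function and the numbers below are purely illustrative assumptions and bear no relation to NSURE's actual transport models, which resolve the pathway in far more detail:

```python
import math

def dose_rate(release_bq_per_a, travel_time_a, half_life_a, dcf_sv_per_bq):
    """One-compartment caricature of a release-to-dose chain: activity
    released from the vault decays exponentially over its groundwater
    travel time, then a dose conversion factor turns intake into dose."""
    lam = math.log(2) / half_life_a
    return release_bq_per_a * math.exp(-lam * travel_time_a) * dcf_sv_per_bq

# Hypothetical numbers: 1e6 Bq/a release, 100 a travel time, 30 a half-life,
# 5e-8 Sv/Bq dose conversion factor.
d = dose_rate(1e6, 100.0, 30.0, 5e-8)
print(d, "Sv/a")
```

Even this caricature shows why travel time and half-life dominate near-surface assessments: a travel time of a few half-lives attenuates the dose by an order of magnitude.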

  19. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the applicable technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used that is developed independently of the code generator itself. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)
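Functional equivalence is ultimately a behavioural claim: the specification and the generated code must agree on every observable input/output pair. The toy check below illustrates that claim on a hypothetical clamping function; note that RETRANS itself establishes equivalence by analysing the generated source against its specification, not by sampled testing as sketched here:

```python
def spec_limiter(x):
    """Specified behaviour: clamp the input to the range [0, 100]."""
    return min(max(x, 0), 100)

def generated_limiter(x):
    """Stand-in for automatically generated code implementing the same spec."""
    if x < 0:
        return 0
    return 100 if x > 100 else x

# Behavioural comparison over boundary and interior inputs.
test_vectors = [-5, 0, 1, 50, 99, 100, 101, 10_000]
equivalent = all(spec_limiter(x) == generated_limiter(x) for x in test_vectors)
print(equivalent)  # True
```

Sampled agreement can only refute equivalence; demonstrating it in general requires the kind of source-versus-specification analysis RETRANS performs.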

  20. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  1. Radioactive action code

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    A new coding system, 'Hazrad', for buildings and transportation containers for alerting emergency services personnel to the presence of radioactive materials has been developed in the United Kingdom. The hazards of materials in the buildings or transport container, together with the recommended emergency action, are represented by a number of codes which are marked on the building or container and interpreted from a chart carried as a pocket-size guide. Buildings would be marked with the familiar yellow 'radioactive' trefoil, the written information 'Radioactive materials' and a list of isotopes. Under this the 'Hazrad' code would be written - three symbols to denote the relative radioactive risk (low, medium or high), the biological risk (also low, medium or high) and the third showing the type of radiation emitted, alpha, beta or gamma. The response cards indicate appropriate measures to take, eg for a high biological risk, Bio3, the wearing of a gas-tight protection suit is advised. The code and its uses are explained. (U.K.)
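Interpreting a three-symbol marking is essentially a table lookup, as the hedged sketch below shows. The symbol spellings and response actions here are invented for illustration; the actual vocabulary and actions are defined by the Hazrad chart carried as a pocket guide:

```python
# Hypothetical symbol scheme: "<RadN> <BioN> <EmissionType>", where N is
# 1 = low, 2 = medium, 3 = high, e.g. "Rad2 Bio3 Gamma".
RISK = {"1": "low", "2": "medium", "3": "high"}

ACTIONS = {  # invented stand-ins for response-card entries
    ("Bio", "1"): "standard protective clothing",
    ("Bio", "2"): "wear respiratory protection",
    ("Bio", "3"): "wear a gas-tight protection suit",
}

def interpret(marking):
    """Decode a three-symbol marking into risks and a recommended action."""
    rad, bio, emission = marking.split()
    return {
        "radioactive risk": RISK[rad[-1]],
        "biological risk": RISK[bio[-1]],
        "radiation type": emission.lower(),
        "action": ACTIONS[(bio[:-1], bio[-1])],
    }

print(interpret("Rad2 Bio3 Gamma")["action"])
```

The example matches the article's Bio3 case, where a gas-tight protection suit is advised; the other entries are placeholders.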

  2. Advanced Imaging Optics Utilizing Wavefront Coding.

    Energy Technology Data Exchange (ETDEWEB)

    Scrymgeour, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boye, Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adelsberger, Kathleen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    Image processing offers the potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and the physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined the image quality of simulated and experimental wavefront-coded images before and after reconstruction in the presence of noise. Challenges in the implementation of a cubic phase in an optical system are discussed. In particular, we found that limits must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront-coded system.
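The defocus invariance that makes wavefront coding work can be demonstrated with a small pupil-plane simulation: adding a strong cubic phase term to the pupil makes the point spread function (PSF) nearly the same in and out of focus, at the price of a fixed blur that post-detection processing must remove. Grid size and phase strengths below are illustrative choices:

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0)

def psf(defocus_rad, cubic_rad=0.0):
    """Incoherent PSF of a circular pupil with defocus and a cubic phase
    plate, normalized to unit energy."""
    phase = defocus_rad * (X**2 + Y**2) + cubic_rad * (X**3 + Y**3)
    field = pupil * np.exp(1j * phase)
    h = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return h / h.sum()

def similarity(a, b):
    """Normalized correlation (cosine similarity) of two PSFs."""
    return np.vdot(a, b).real / (np.linalg.norm(a) * np.linalg.norm(b))

# The plain aperture's PSF changes drastically with defocus; the cubic-phase
# PSF barely changes over the same range, which is what deconvolution exploits.
plain = similarity(psf(0.0), psf(10.0))
coded = similarity(psf(0.0, cubic_rad=60.0), psf(10.0, cubic_rad=60.0))
print(plain, coded)
```

The trade-off visible even in this sketch is that the coded in-focus PSF is itself blurred, so reconstruction amplifies noise, which is one source of the noise limits discussed in the report.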

  3. CT dosimetry computer codes: Their influence on radiation dose estimates and the necessity for their revision under new ICRP radiation protection standards

    International Nuclear Information System (INIS)

    Kim, K. P.; Lee, J.; Bolch, W. E.

    2011-01-01

    Computed tomography (CT) dosimetry computer codes are the most commonly used tools owing to their user-friendliness, but often with little consideration of the potential uncertainty in the estimated organ doses or of the codes' underlying limitations. Generally, the radiation doses calculated with different CT dosimetry computer codes were comparable, although relatively large differences were observed for some specific organs or tissues. The largest differences in radiation doses calculated using different computer codes were observed for Siemens Sensation CT scanners. Radiation doses varied with patient age and sex: younger patients, and adult females, generally receive a higher radiation dose than adult males for the same CT technique factors. There are a number of limitations in current CT dosimetry computer codes, including unrealistic modelling of the human anatomy, a limited number of organs and tissues for dose calculation, the inability to alter patient height and weight, and non-applicability to new CT technologies. Therefore, further studies are needed to overcome these limitations and to improve CT dosimetry. (authors)

  4. User Effect on Code Application and Qualification Needs

    International Nuclear Information System (INIS)

    D'Auria, F.; Salah, A.B.

    2008-01-01

    Experience with some code assessment case studies and with additional ISPs has shown the dominant effect of the code user on the predicted system behavior. The general findings of the user-effect investigations indicate that, in addition to genuine user effects, other factors influence the results of the calculations and are hidden under the general title of user effects: the specific characteristics of the experimental facilities, i.e. their limitations as far as code assessment is concerned; the limitations of the thermal-hydraulic codes used in simulating certain system behavior or phenomena; and limitations arising from the code user's interpretation of the experimental data base. On the basis of the discussions in this paper, the following conclusions and recommendations can be made: more dialogue appears to be necessary with the experimenters in the planning of code assessment calculations, e.g. ISPs; user guidelines for the codes are not complete, and the lack of sufficient and detailed user guidelines was observed in some of the case studies; more extensive user instruction and training, improved user guidelines, or quality assurance procedures may partially reduce the subjective user influence on the calculated results; the discrepancies between experimental data and code predictions are due both to intrinsic code limits and to the so-called user effects, and there is a genuine need to quantify how much of the disagreement is due to poor utilization of the code and how much to the code itself - a need that arises especially for uncertainty evaluation studies (e.g. [18]), which do not take the mentioned user effects into account; and a focused investigation, based on the results of comparison calculations, e.g. ISPs, analyzing the experimental data and the results of the specific code in order to evaluate the user effects and the related experimental aspects, should be an integral part of the code assessment process.

  5. Review of design codes of concrete encased steel short columns under axial compression

    Directory of Open Access Journals (Sweden)

    K.Z. Soliman

    2013-08-01

    Full Text Available In recent years, the use of encased steel concrete columns has increased significantly in medium-rise and high-rise buildings. The aim of the present investigation is to assess experimentally the current methods and codes for evaluating the ultimate load behavior of concrete encased steel short columns. The current design provisions for composite columns in the Egyptian codes ECP203-2007 and ECP-SC-LRFD-2012, as well as the American Institute of Steel Construction AISC-LRFD-2010, the American Concrete Institute ACI-318-2008, and British Standard BS-5400-5, were reviewed. The contributions of the encased steel section and of the concrete section to the axial capacity were also studied according to the previously mentioned codes. Ten encased steel concrete columns were investigated experimentally to study the effect of concrete confinement and of different types of encased steel section. The measured axial capacities of the ten tested composite columns were compared with the values calculated by the above-mentioned codes. It is concluded that non-negligible discrepancies exist between the codes and the experimental results, since the confinement effect is not considered by the codes in predicting either the strength or the ductility of the concrete. The confining effect was clearly influenced by the shape of the encased steel section: the tube-shaped steel section leads to better confinement than the SIB section. Among the codes used, ECP-SC-LRFD-2012 led to the most conservative results.

  6. Improvement of the computing speed of the FBR fuel pin bundle deformation analysis code 'BAMBOO'

    International Nuclear Information System (INIS)

    Ito, Masahiro; Uwaba, Tomoyuki

    2005-04-01

    JNC has developed a coupled analysis system consisting of the fuel pin bundle deformation analysis code 'BAMBOO' and the thermal hydraulics analysis code 'ASFRE-IV' for the purpose of evaluating the integrity of a subassembly under the BDI condition. This coupled analysis took much computation time because it needs convergent calculations to obtain numerically stationary solutions for the thermal and mechanical behaviors. We reduced the computation time of the BAMBOO code analysis to make the coupled analysis practicable. BAMBOO is a FEM code, and as such its matrix calculations consume a large memory area to temporarily store intermediate results in the solution of simultaneous linear equations. The code used the hard disk drive (HDD) as virtual memory to save random access memory (RAM). However, use of the HDD increased the computation time because input/output (I/O) processing with the HDD took much time in data accesses. We improved the code so that it conducts I/O processing only in RAM during matrix calculations and runs on high-performance computers. This improvement considerably increased the CPU occupation rate during the simulation and reduced the total simulation time of the BAMBOO code to about one-seventh of that before the improvement. (author)

  7. High dynamic range coding imaging system

    Science.gov (United States)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. We then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images into a displayed HDR image. We build an optical simulation model and obtain simulation images to verify the novel system.
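The fusion step, in which several LDR frames acquired at different exposure settings are merged into one HDR estimate, can be sketched as a weighted radiance average. The hat-shaped weighting and the assumption of a linear sensor response are our illustrative choices, not necessarily the algorithm used in the paper.

```python
import numpy as np

# Hedged sketch of multi-exposure fusion: each LDR frame (values in [0,1])
# is divided by its exposure time to estimate scene radiance, then frames
# are averaged with weights that trust mid-range (well-exposed) pixels most.

def fuse_ldr(frames, exposure_times):
    """Return a weighted radiance estimate from LDR frames."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: peaks at 0.5
        num += w * img / t                 # linear response: radiance = img / t
        den += w
    return num / np.maximum(den, 1e-12)
```

For a scene point of true radiance L, a frame at exposure t records roughly L*t (until saturation), so dividing by t and averaging recovers L while down-weighting clipped pixels.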

  8. Convergence of macrostates under reproducible processes

    International Nuclear Information System (INIS)

    Rau, Jochen

    2010-01-01

    I show that whenever a system undergoes a reproducible macroscopic process the mutual distinguishability of macrostates, as measured by their relative entropy, diminishes. This extends the second law which regards only ordinary entropies, and hence only the distinguishability between macrostates and one specific reference state (equidistribution). The new result holds regardless of whether the process is linear or nonlinear. Its proof hinges on the monotonicity of quantum relative entropy under arbitrary coarse grainings, even those that cannot be represented by trace-preserving completely positive maps.
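The inequality underlying this result can be written explicitly. With the quantum relative entropy defined in the standard way, monotonicity under a coarse-graining map $\Phi$ reads (a standard statement of the property, not quoted from the paper):

```latex
S(\rho \,\|\, \sigma) = \operatorname{Tr}\,\rho\,(\ln\rho - \ln\sigma),
\qquad
S\!\left(\Phi(\rho) \,\|\, \Phi(\sigma)\right) \le S(\rho \,\|\, \sigma).
```

Taking $\sigma$ to be the equidistribution recovers the ordinary second law as the special case mentioned in the abstract.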

  9. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities, were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source-code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long-running code that only runs well if sufficient main memory is available.

  10. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    Directory of Open Access Journals (Sweden)

    Haruo Hosoya

    2017-07-01

    Full Text Available Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.

  11. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents, including non-LOCA (loss of coolant accident) events and LOCA, of the SMART plant. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, its input processing, and the procedures for steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described. First, the conservation equations, the discretization process for the numerical analysis, and the search method for the state relationships are described. Then, the core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  12. u-Constacyclic codes over F_p+u{F}_p and their applications of constructing new non-binary quantum codes

    Science.gov (United States)

    Gao, Jian; Wang, Yongkang

    2018-01-01

    Structural properties of u-constacyclic codes over the ring F_p+u{F}_p are given, where p is an odd prime and u^2=1. Under a special Gray map from F_p+u{F}_p to F_p^2, some new non-binary quantum codes are obtained by this class of constacyclic codes.
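For concreteness, the kind of Gray map referred to can be sketched as follows. Since $u^2 = 1$ and $p$ is odd, the ring splits through the idempotents $(1 \pm u)/2$, and a map commonly used for such rings in the literature (the paper's exact definition may differ) is

```latex
\phi \colon \mathbb{F}_p + u\mathbb{F}_p \to \mathbb{F}_p^2,
\qquad
\phi(a + ub) = (a + b,\; a - b).
```

Applied coordinatewise, such a map carries a length-$n$ constacyclic code over $\mathbb{F}_p + u\mathbb{F}_p$ to a length-$2n$ code over $\mathbb{F}_p$, from which quantum codes can be constructed by standard methods.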

  13. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  14. Unfolding code for neutron spectrometry based on neural nets technology

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on artificial neural networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed with a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture previously optimized by the Robust Design of Artificial Neural Networks Methodology. The code is easy to use, friendly and intuitive to the user. It was designed for a Bonner sphere system based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count rates measured with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)
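The unfolding step described above, from seven sphere count rates to a 60-bin spectrum, amounts to a forward pass through a small network. The sketch below shows only the shape of such a pass: the hidden-layer size, activation and untrained random weights are placeholders standing in for the embedded, pre-optimized architecture of the actual code.

```python
import numpy as np

# Shape-only sketch of the unfolding forward pass: 7 Bonner-sphere count
# rates in, 60 spectrum bins out. Hidden size, activation and (random)
# weights are illustrative assumptions, not the trained network.

rng = np.random.default_rng(0)

def unfold(counts, n_hidden=30, n_bins=60):
    w1 = rng.normal(size=(len(counts), n_hidden))
    w2 = rng.normal(size=(n_hidden, n_bins))
    h = np.tanh(counts @ w1)   # hidden layer
    return h @ w2              # linear output: one value per energy bin
```

The dosimetric quantities would then be obtained by folding the unfolded spectrum with published fluence-to-dose conversion coefficients.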

  15. ITER Dynamic Tritium Inventory Modeling Code

    International Nuclear Information System (INIS)

    Cristescu, Ioana-R.; Doerr, L.; Busigin, A.; Murdoch, D.

    2005-01-01

    A tool for tritium inventory evaluation within each sub-system of the ITER fuel cycle is vital, with respect both to the process of licensing ITER and to its operation. It is very likely that measurements of total tritium inventories will not be possible for all sub-systems; however, tritium accounting may be achieved by modeling the hold-up within each sub-system and by validating these models in real time against the monitored flows and tritium streams between the systems. To get reliable results, accurate dynamic modeling of the tritium content in each sub-system is necessary. In order to optimize the configuration and operation of the ITER fuel cycle, a dynamic fuel cycle model was developed progressively in the decade up to 2000-2001. As the designs of some sub-systems of the fuel cycle (i.e. vacuum pumping and the Neutral Beam Injectors (NBI)) have meanwhile progressed substantially, a new code was developed under a different platform to incorporate these modifications. The new code takes over the models and algorithms for some subsystems, such as the Isotope Separation System (ISS); where simplified models had previously been used, more detailed ones have been introduced, as for the Water Detritiation System (WDS). To reflect all these changes, the new code developed by the EU participating team was named TRIMO (Tritium Inventory Modeling), to emphasize the use of the code in assessing the tritium inventory within ITER

  16. Neutron cross section library production code system for continuous energy Monte Carlo code MVP. LICEM

    International Nuclear Information System (INIS)

    Mori, Takamasa; Nakagawa, Masayuki; Kaneko, Kunio.

    1996-05-01

    A code system has been developed to produce neutron cross section libraries for the MVP continuous energy Monte Carlo code from an evaluated nuclear data library in the ENDF format. The code system consists of 9 computer codes, and can process nuclear data in the latest ENDF-6 format. By using the present system, MVP neutron cross section libraries for important nuclides in reactor core analyses, shielding and fusion neutronics calculations have been prepared from JENDL-3.1, JENDL-3.2, JENDL-FUSION file and ENDF/B-VI data bases. This report describes the format of MVP neutron cross section library, the details of each code in the code system and how to use them. (author)

  17. Neutron cross section library production code system for continuous energy Monte Carlo code MVP. LICEM

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-05-01

    A code system has been developed to produce neutron cross section libraries for the MVP continuous energy Monte Carlo code from an evaluated nuclear data library in the ENDF format. The code system consists of 9 computer codes, and can process nuclear data in the latest ENDF-6 format. By using the present system, MVP neutron cross section libraries for important nuclides in reactor core analyses, shielding and fusion neutronics calculations have been prepared from JENDL-3.1, JENDL-3.2, JENDL-FUSION file and ENDF/B-VI data bases. This report describes the format of MVP neutron cross section library, the details of each code in the code system and how to use them. (author).

  18. INTRANS. A computer code for the non-linear structural response analysis of reactor internals under transient loads

    International Nuclear Information System (INIS)

    Ramani, D.T.

    1977-01-01

    The 'INTRANS' system is a general-purpose computer code designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, the shield building, and external as well as internal gapped spring support systems. This paper describes a computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped-mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. In the INTRANS code, the component flexibilities of a discrete lumped-mass planar model of the reactor internals are computed by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional and shear stiffnesses, interacting with external or internal linear as well as non-linear multi-gapped spring supports. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for the solution of the dynamic equilibrium equations of motion for the system. During the computation, the dynamic response of each lumped mass is calculated at each instant of time using the well-known step-by-step procedure: at any instant the transient dynamic motions of the system are held stationary, and from the predicted motions and internal forces of the previous instant the complete response at the time-step of interest is computed. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval
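The fourth-order Runge-Kutta scheme named in the abstract advances a state vector y (here, the displacements and velocities of the lumped masses) one time-step at a time. A generic single step of the classical scheme, for any equation of motion dy/dt = f(t, y), can be sketched as:

```python
import numpy as np

# Classical fourth-order Runge-Kutta step for dy/dt = f(t, y).
# In a lumped-mass model, y stacks displacements and velocities.

def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

For a linear oscillator dy/dt = (v, -y), repeated application of this step reproduces the harmonic motion to fourth-order accuracy in the step size.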

  19. Dress codes and appearance policies: challenges under federal legislation, part 3: Title VII, the Americans with Disabilities Act, and the National Labor Relations Act.

    Science.gov (United States)

    Mitchell, Michael S; Koen, Clifford M; Darden, Stephen M

    2014-01-01

    As more and more individuals express themselves with tattoos and body piercings and push the envelope on what is deemed appropriate in the workplace, employers have an increased need for creation and enforcement of reasonable dress codes and appearance policies. As with any employment policy or practice, an appearance policy must be implemented and enforced without regard to an individual's race, color, sex, national origin, religion, disability, age, or any other protected status. A policy governing dress and appearance based on the business needs of an employer that is applied fairly and consistently and does not have a disproportionate effect on any protected class will generally be upheld if challenged in court. By examining some of the more common legal challenges to dress codes and how courts have resolved the disputes, health care managers can avoid many potential problems. This article, the third part of a 3-part examination of dress codes and appearance policies, focuses on the issues of race and national origin under the Civil Rights Act, disability under the Americans With Disabilities Act, and employees' rights to engage in concerted activities under the National Labor Relations Act. Pertinent court cases that provide guidance for employers are addressed.

  20. Leadership Class Configuration Interaction Code - Status and Opportunities

    Science.gov (United States)

    Vary, James

    2011-10-01

    With support from SciDAC-UNEDF (www.unedf.org) nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers, including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run on a range of computers, from laptops to leadership-class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development, and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archives the results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, its capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).

  1. Study of probing beam enlargement due to forward-scattering under low wavenumber turbulence using a FDTD full-wave code

    Energy Technology Data Exchange (ETDEWEB)

    Silva, F. da [Associação EURATOM/IST, IPFN-LA, Instituto Superior Técnico, Lisbon (Portugal); Heuraux, S. [Institut Jean Lamour, CNRS-Nancy-Université, BP70239, Vandoeuvre-lès-Nancy (France); Gusakov, E.; Popov, A. [Ioffe Institute, Polytekhnicheskaya, St Petersburg (Russian Federation)

    2011-07-01

    Forward-scattering under a high level of turbulence, or over long propagation paths, can induce significant effects, as predicted by theory, and impose a signature on the Doppler reflectometry response. Simulations using a FDTD (finite-difference time-domain) full-wave code have confirmed the main dependencies and general behavior described by theory, but display a returned RMS power, at moderate amplitudes, that is half of the one predicted by theory, owing to the impossibility of meeting the numerical requirements needed to describe the small-wavenumber spectrum with the desired accuracy. One contributing factor may be the splitting and enlargement of the probing beam. At high turbulence levels, the scattered power returning to the antenna is higher than predicted by theory, probably because the scattering zone lies closer than the oblique cutoff. This loss of coherence of the wavefront induces a beam spreading, which is also responsible for a reduction of the wavenumber resolution. With a FDTD full-wave code we study the behavior of the probing beam under several amplitude levels of low-wavenumber plasma turbulence, using long temporal simulation series to ensure statistical accuracy. (authors)

  2. Application of the MELCOR code to design basis PWR large dry containment analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Jesse; Notafrancesco, Allen (USNRC, Office of Nuclear Regulatory Research, Rockville, MD); Tills, Jack Lee (Jack Tills & Associates, Inc., Sandia Park, NM)

    2009-05-01

    The MELCOR computer code has been developed by Sandia National Laboratories under USNRC sponsorship to provide the capability for independently auditing analyses submitted by reactor manufacturers and utilities. MELCOR is a fully integrated code (encompassing the reactor coolant system and the containment building) that models the progression of postulated accidents in light water reactor power plants. To assess the adequacy of the containment thermal-hydraulic modeling incorporated in the MELCOR code for application to PWR large dry containments, several selected demonstration designs were analyzed. This report documents MELCOR code demonstration calculations performed for postulated design basis accident (DBA) analysis (LOCA and MSLB) inside containment, which are compared to other code results. The key processes when analyzing the containment loads inside PWR large dry containments are (1) expansion and transport of high mass/energy releases, (2) heat and mass transfer to structural passive heat sinks, and (3) containment pressure reduction due to engineered safety features. A code-to-code benchmarking for DBA events showed that MELCOR predictions of maximum containment loads were equivalent to similar predictions using a qualified containment code known as CONTAIN. This equivalency was found to apply for both single- and multi-cell containment models.

  3. A molecular dynamics simulation code ISIS

    International Nuclear Information System (INIS)

    Kambayashi, Shaw

    1992-06-01

    Computer simulation based on the molecular dynamics (MD) method has become an important tool, complementary to experiments and theoretical calculations, in a wide range of scientific fields such as physics, chemistry and biology. In the MD method, the Newtonian equations of motion of classical particles are integrated numerically to reproduce a phase-space trajectory of the system. In the 1980s, several new techniques were developed for simulation at constant temperature and/or constant pressure, convenient for comparing the results of computer simulation with experimental results. We first summarize the MD method for both microcanonical and canonical simulations. Then we present an overview of the newly developed ISIS (Isokinetic Simulation of Soft-spheres) code and its performance on various computers, including vector processors. The ISIS code has the capability to perform an MD simulation under constant-temperature conditions by using the isokinetic constraint method. The equations of motion are integrated by a very accurate fifth-order finite-difference algorithm. The bookkeeping method is also utilized to reduce the computational time. Furthermore, the ISIS code is well adapted for vector processing: speedup ratios ranging from 16 to 24 are obtained on a VP2600/10 vector processor. (author)
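The isokinetic constraint mentioned above keeps the kinetic energy, and hence the kinetic temperature, fixed. A minimal sketch is a velocity rescaling applied after each integration step; the actual ISIS code builds the constraint into its fifth-order integrator rather than rescaling a posteriori.

```python
import numpy as np

# Hedged sketch of an isokinetic velocity rescaling: after a step, scale
# all velocities so the total kinetic energy matches a fixed target.
# ISIS applies the constraint inside the integrator; this is illustrative.

def rescale_velocities(v, masses, target_kinetic):
    """Scale velocities (N x 3 array) so total kinetic energy is fixed."""
    ke = 0.5 * np.sum(masses[:, None] * v**2)
    return v * np.sqrt(target_kinetic / ke)
```

Because every velocity is multiplied by the same factor, the directions of motion are preserved while the kinetic temperature 2*KE/(3*N*k_B) is held constant.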

  4. Development of the GUI environments of MIDAS code for convenient input and output processing

    International Nuclear Information System (INIS)

    Kim, K. L.; Kim, D. H.

    2003-01-01

    MIDAS is being developed at KAERI as an integrated severe accident analysis code with easy model modification and addition, achieved by restructuring the data transfer scheme. In this paper, the input file management system, IEDIT, and the graphic simulation system, SATS, are presented as the MIDAS input and output GUI systems. These two systems form the basis of the MIDAS GUI system for input and output processing, and they are expected to be useful tools for severe accident analysis and simulation

  5. Code Package to Analyze Parameters of the WWER Fuel Rod. TOPRA-2 Code - Verification Data

    International Nuclear Information System (INIS)

    Scheglov, A.; Proselkov, V.; Passage, G.; Stefanova, S.

    2009-01-01

    Presented are the data for the computer codes used to analyze WWER fuel rods in the WWER department of RRC 'Kurchatov Institute'. A description is given of the TOPRA-2 code, intended for the engineering analysis of the thermophysical and strength parameters of the WWER fuel rod - temperature distributions along the fuel radius, gas pressure under the cladding, stresses in the cladding, etc. - for reactor operation in normal conditions. Some results are presented of the code verification against test problems and against data obtained in the experimental programs, together with a comparison of calculations performed with the TOPRA-2 and TRANSURANUS (V1M1J06) codes. The results obtained in the course of the verification demonstrate the applicability of the methodology and the TOPRA-2 code to the engineering analysis of WWER fuel rods

  6. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys list the complete set of currently defined codes for the NAGRADATA system with appropriate definitions, arranged: 1. according to subject matter (thematically); 2. with the codes listed alphabetically; and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage memory requirements; and standardisation of terminology. The thesaurus-like nature of this 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)

  7. THE APPEAL CALLED “EMBARGOS DE DECLARAÇÃO” IN ELECTORAL PROCESS: A BRIEF VIEW AFTER THE BRAZILIAN NEW CIVIL PROCEDURE CODE

    Directory of Open Access Journals (Sweden)

    Rodrigo Mazzei

    2016-12-01

    Full Text Available This paper analyzes the regulation of embargos de declaração within the electoral process, as well as the interpretation the courts have given it. It addresses essential issues of embargos de declaração, such as the deadline, legal nature, hypotheses of admissibility, and suspensive effect, many of which are the subject of discussion in doctrine and jurisprudence, mainly due to the diversity and variety of rules dealing with the subject (the Electoral Code, the Internal Regulations of the Courts, and, applied in the alternative, the Civil Procedure Code and the Criminal Procedure Code), besides the need for a constitutional interpretation focused on the embargos de declaração. It also examines the proposals of the New CPC bill, pending in the legislature, for the regulation of embargos de declaração and the impact this new text will have on the electoral process, pointing out possible ways of reconciling the "new" civil procedure with electoral law.

  8. Applications of Coding in Network Communications

    Science.gov (United States)

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  9. Computer codes for neutron data evaluation

    International Nuclear Information System (INIS)

    Nakagawa, Tsuneo

    1979-01-01

    Data compilation codes such as NESTOR and REPSTOR, and NDES (Neutron Data Evaluation System), are mainly discussed. NDES is a code for neutron data evaluation using a TSS terminal (TEKTRONIX 4014); its users can plot data and perform nuclear-model calculations in conversational mode. (author)

  10. The EUCLID/V1 Integrated Code for Safety Assessment of Liquid Metal Cooled Fast Reactors. Part 1: Basic Models

    Science.gov (United States)

    Mosunova, N. A.

    2018-05-01

    The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and dioxide, mixed oxide, or nitride uranium-plutonium pellet fuel under normal operation, anticipated operational occurrences, and accident conditions, by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. The modeled objects, the equation systems in differential form solved in each module of the EUCLID/V1 integrated code (the thermal-hydraulic module, the neutronics module, the fuel rod analysis module, and the burnup and decay heat calculation modules), the main calculated quantities, and the limitations on application of the code are presented. The article also describes the scope of functions performed by the integrated code's thermal-hydraulic module, which can describe both one- and two-phase processes occurring in the coolant. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.

  11. TRUPACT-II Content Codes (TRUCON), Revision 8 and list of chemicals and materials in TRUCON (chemical list), Revision 7

    International Nuclear Information System (INIS)

    1996-03-01

    The Transuranic Package Transporter (TRUPACT-II) Content Codes document (TRUCON) represents the development of a new content code system for shipping contact-handled transuranic (CH-TRU) waste in TRUPACT-II. It will be used to convert existing waste forms, content codes, and any other identification codes into a system that is uniform across all Department of Energy (DOE) sites. These various codes can be grouped under the newly formed shipping content codes without any loss of waste characterization information. The TRUCON document provides a parametric description for each content code for waste generated and compiles this information for all ten DOE sites. Compliance with the waste generation, processing, and certification procedures at the sites (outlined in the TRUCON document for each content code) ensures that prohibited waste forms are not present in the waste. The content code essentially gives a description of the CH-TRU waste material in terms of processes, packaging, and generation location. This helps to provide cradle-to-grave traceability of the waste material, so that the various actions required to assess its qualification as payload for the TRUPACT-II package can be performed

  12. Coding for effective denial management.

    Science.gov (United States)

    Miller, Jackie; Lineberry, Joe

    2004-01-01

    Nearly everyone will agree that accurate and consistent coding of diagnoses and procedures is the cornerstone for operating a compliant practice. The CPT or HCPCS procedure code tells the payor what service was performed and also (in most cases) determines the amount of payment. The ICD-9-CM diagnosis code, on the other hand, tells the payor why the service was performed. If the diagnosis code does not meet the payor's criteria for medical necessity, all payment for the service will be denied. Implementation of an effective denial management program can help "stop the bleeding." Denial management is a comprehensive process that works in two ways. First, it evaluates the cause of denials and takes steps to prevent them. Second, denial management creates specific procedures for refiling or appealing claims that are initially denied. Accurate, consistent and compliant coding is key to both of these functions. The process of proactively managing claim denials also reveals a practice's administrative strengths and weaknesses, enabling radiology business managers to streamline processes, eliminate duplicated efforts and shift a larger proportion of the staff's focus from paperwork to servicing patients--all of which are sure to enhance operations and improve practice management and office morale. Accurate coding requires a program of ongoing training and education in both CPT and ICD-9-CM coding. Radiology business managers must make education a top priority for their coding staff. Front office staff, technologists and radiologists should also be familiar with the types of information needed for accurate coding. A good staff training program will also cover the proper use of Advance Beneficiary Notices (ABNs). Registration and coding staff should understand how to determine whether the patient's clinical history meets criteria for Medicare coverage, and how to administer an ABN if the exam is likely to be denied. Staff should also understand the restrictions on use of

  13. Verification of reactor safety codes

    International Nuclear Information System (INIS)

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of a wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full-scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  14. Differentiation of ileostomy from colostomy procedures: assessing the accuracy of current procedural terminology codes and the utility of natural language processing.

    Science.gov (United States)

    Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K

    2013-08-01

    Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of current procedural terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differs dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal, and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the master surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011. 
The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (75
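The sensitivity and specificity figures quoted above come from confusion-matrix counts. A minimal sketch of that computation follows; the counts below are invented for illustration and are not the study's data:

```python
# Sensitivity (true positive rate) and specificity (true negative rate)
# from confusion-matrix counts, as used to validate CPT codes and NLP
# against the manually reviewed operative reports.

def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true cases correctly flagged
    specificity = tn / (tn + fp)   # fraction of non-cases correctly excluded
    return sensitivity, specificity

# Hypothetical counts: 87 true positives, 13 false negatives,
# 195 true negatives, 5 false positives.
sens, spec = sensitivity_specificity(tp=87, fp=5, fn=13, tn=195)
# sens = 0.87, spec = 0.975
```

A validation study of this kind simply tabulates the four counts for each code or NLP classifier against the manual gold standard and reports the two ratios.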

  15. UNSAT-H, an unsaturated soil water flow code for use at the Hanford site: code documentation

    International Nuclear Information System (INIS)

    Fayer, M.J.; Gee, G.W.

    1985-10-01

    The unsaturated soil moisture flow code, UNSAT-H, which was developed at Pacific Northwest Laboratory for assessing water movement at waste sites on the Hanford site, is documented in this report. This code is used in simulating the water dynamics of arid sites under consideration for waste disposal. The results of an example simulation of constant infiltration show excellent agreement with an analytical solution and another numerical solution, thus providing some verification of the UNSAT-H code. Areas of the code are identified for future work and include runoff, snowmelt, long-term climate and plant models, and parameter measurement. 29 refs., 7 figs., 2 tabs

  16. Recent activities in accelerator code development

    International Nuclear Information System (INIS)

    Copper, R.K.; Ryne, R.D.

    1992-01-01

    In this paper we will review recent activities in the area of code development as it affects the accelerator community. We will first discuss the changing computing environment. We will review how the computing environment has changed in the last 10 years, with emphasis on computing power, operating systems, computer languages, graphics standards, and massively parallel processing. Then we will discuss recent code development activities in the areas of electromagnetics codes and beam dynamics codes

  17. Development of GRIF-SM: The code for analysis of beyond design basis accidents in sodium cooled reactors

    International Nuclear Information System (INIS)

    Chvetsov, I.; Kouznetsov, I.; Volkov, A.

    2000-01-01

    The GRIF-SM code was developed in the IPPE fast reactor department in 1992 for the analysis of transients in sodium cooled fast reactors under severe accident conditions. The code solves the transient hydrodynamics and heat transfer equations, taking into account the possibility of coolant boiling, fuel and steel melting, reactor kinetics, and reactivity feedback due to variations in the temperature, density, and dimensions of the core components. The calculation yields the transient distribution of the coolant velocity and density, as well as the temperatures of the fuel pins, the reactor core, and the primary circuit as a whole. Development of the code over the following six years was aimed at modifying the models describing the thermal-hydraulic characteristics of the reactor, in particular at a detailed description of the sodium boiling process. The GRIF-SM code was carefully validated against FZK experimental data on steady-state sodium boiling in an electrically heated tube, transient sodium boiling in a 7-pin bundle, and transient sodium boiling in a 37-pin bundle under flow reduction simulating a ULOF accident. To show the code's capabilities, some results of its application to beyond-design-basis accident analysis for a BN-800-type reactor are presented. (author)

  18. Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations

    International Nuclear Information System (INIS)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I.

    2015-01-01

    Burn-up calculation of VVER cores with a Monte Carlo code is a complex process that requires large computational cost, which complicates the use of Monte Carlo codes for design and operational calculations. It is proposed to use previously prepared isotopic compositions in Monte Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method based on a spectral functional and on reference isotopic compositions calculated by the engineering codes TVS-M and PERMAK-A. In this work, the multiplication factors and power distributions of an FA and of a VVER of infinite height are calculated by the Monte Carlo code MCU using the previously prepared isotopic compositions, and the MCU results are compared with the data obtained by the engineering codes.

  19. Narrative and emotion process in psychotherapy: an empirical test of the Narrative-Emotion Process Coding System (NEPCS).

    Science.gov (United States)

    Boritz, Tali Z; Bryntwick, Emily; Angus, Lynne; Greenberg, Leslie S; Constantino, Michael J

    2014-01-01

    While the individual contributions of narrative and emotion processes to psychotherapy outcome have been the focus of recent interest in psychotherapy research literature, the empirical analysis of narrative and emotion integration has rarely been addressed. The Narrative-Emotion Processes Coding System (NEPCS) was developed to provide researchers with a systematic method for identifying specific narrative and emotion process markers, for application to therapy session videos. The present study examined the relationship between NEPCS-derived problem markers (same old storytelling, empty storytelling, unstoried emotion, abstract storytelling) and change markers (competing plotlines storytelling, inchoate storytelling, unexpected outcome storytelling, and discovery storytelling), and treatment outcome (recovered versus unchanged at therapy termination) and stage of therapy (early, middle, late) in brief emotion-focused (EFT), client-centred (CCT), and cognitive (CT) therapies for depression. Hierarchical linear modelling analyses demonstrated a significant Outcome effect for inchoate storytelling (p = .037) and discovery storytelling (p = .002), a Stage × Outcome effect for abstract storytelling (p = .05), and a Stage × Outcome × Treatment effect for competing plotlines storytelling (p = .001). There was also a significant Stage × Outcome effect for NEPCS problem markers (p = .007) and change markers (p = .03). The results provide preliminary support for the importance of assessing the contribution of narrative-emotion processes to efficacious treatment outcomes in EFT, CCT, and CT treatments of depression.

  20. CXSFIT Code Application to Process Charge-Exchange Recombination Spectroscopy Data at the T-10 Tokamak

    Science.gov (United States)

    Serov, S. V.; Tugarinov, S. N.; Klyuchnikov, L. A.; Krupin, V. A.; von Hellermann, M.

    2017-12-01

    The applicability of the CXSFIT code to process experimental data from Charge-eXchange Recombination Spectroscopy (CXRS) diagnostics at the T-10 tokamak is studied with a view to its further use for processing experimental data at the ITER facility. The design and operating principle of the CXRS diagnostics are described. The main methods for processing the CXRS spectra of the 5291-Å line of C5+ ions at the T-10 tokamak (with and without subtraction of parasitic emission from the edge plasma) are analyzed. The method of averaging the CXRS spectra over several shots, which is used at the T-10 tokamak to increase the signal-to-noise ratio, is described. The approximation of the spectrum by a set of Gaussian components is used to identify the active CXRS line in the measured spectrum. Using the CXSFIT code, the ion temperature in ohmic discharges and discharges with auxiliary electron cyclotron resonance heating (ECRH) at the T-10 tokamak is calculated from the CXRS spectra of the 5291-Å line. The time behavior of the ion temperature profile in different ohmic heating modes is studied. The temperature profile dependence on the ECRH power is measured, and the dynamics of ECR removal of carbon nuclei from the T-10 plasma is described. Experimental data from the CXRS diagnostics at T-10 substantially contribute to the implementation of physical programs of studies on heat and particle transport in tokamak plasmas and investigation of geodesic acoustic mode properties.
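The fitting step described above, approximating a measured spectrum by Gaussian components and inferring the ion temperature from the Doppler width, can be sketched as follows. All numbers are synthetic and invented for illustration; they are not T-10 data, and CXSFIT itself handles multi-component fits with parasitic-line subtraction:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a single Gaussian plus constant background to a synthetic CXRS
# spectrum around the 5291-Å carbon line, then infer T_i from the width.

LAMBDA0 = 5291.0  # Å, rest wavelength of the active CXRS line

def gaussian(lam, amp, center, sigma, background):
    return amp * np.exp(-0.5 * ((lam - center) / sigma) ** 2) + background

rng = np.random.default_rng(0)
lam = np.linspace(5286.0, 5296.0, 200)
spectrum = gaussian(lam, 100.0, LAMBDA0, 0.8, 5.0)  # "true" sigma = 0.8 Å
spectrum += rng.normal(0.0, 1.0, lam.size)          # detector noise

popt, _ = curve_fit(gaussian, lam, spectrum, p0=[80.0, 5290.5, 0.5, 0.0])
amp, center, sigma, background = popt

# Doppler broadening: sigma/lambda0 = sqrt(k*T_i / (m*c^2)), hence
# T_i = m_C c^2 * (sigma/lambda0)^2, with m_C c^2 ≈ 1.118e10 eV for carbon-12.
ti_ev = 1.118e10 * (abs(sigma) / LAMBDA0) ** 2
```

With a 0.8-Å Gaussian width this simplified estimate gives an ion temperature of a few hundred eV, which is the right order of magnitude for an ohmic tokamak discharge.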

  1. Radioactive hospital wastes. Radiations under control

    International Nuclear Information System (INIS)

    Bondeelle, A.; Delmotte, H.; Gauron, C.

    2006-07-01

    A set of articles proposes an overview of legal and regulatory developments regarding radioactive hospital wastes. These measures appear notably in the Public Health Code and in the Labour Code. One article outlines the role of the radiation protection expert in the elimination of contaminated wastes (the four major steps of this elimination are indicated; the peculiarities of the hospital setting are outlined, as well as control procedures and the importance of training and information). Another article describes the specific activity of the Creteil incinerator, which comprises a unit for the incineration of care activity wastes under very constraining regulation

  2. The "Wow! signal" of the terrestrial genetic code

    Science.gov (United States)

    shCherbak, Vladimir I.; Makukov, Maxim A.

    2013-05-01

    It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological media. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, but stronger in noise immunity is the genetic code. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales; in fact, it is the most durable construct known. Therefore it represents an exceptionally reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong intelligent-like "signal" in the genetic code is then a testable consequence of such scenario. Here we show that the terrestrial code displays a thorough precision-type orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes (the null hypothesis that they are due to chance coupled with presumable evolutionary pathways is rejected with P-value < 10-13). The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality, among which are the symbol of zero, the privileged decimal syntax and semantical symmetries. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin. Plausible ways of

  3. Integrated Numerical Experiments (INEX) and the Free-Electron Laser Physical Process Code (FELPPC)

    International Nuclear Information System (INIS)

    Thode, L.E.; Chan, K.C.D.; Schmitt, M.J.; McKee, J.; Ostic, J.; Elliott, C.J.; McVey, B.D.

    1990-01-01

    The strong coupling of subsystem elements, such as the accelerator, wiggler, and optics, greatly complicates the understanding and design of a free electron laser (FEL), even at the conceptual level. To address the strong coupling character of the FEL the concept of an Integrated Numerical Experiment (INEX) was proposed. Unique features of the INEX approach are consistency and numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigates the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. The INEX approach has been applied to a large number of accelerator and FEL experiments. Overall, the agreement between INEX and the experiments is very good. Despite the success of INEX, the approach is difficult to apply to trade-off and initial design studies because of the significant manpower and computational requirements. On the other hand, INEX provides a base from which realistic accelerator, wiggler, and optics models can be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from INEX, provides coupling between the subsystem models, and incorporates application models relevant to a specific trade-off or design study. In other words, FELPPC solves the complete physical process model using realistic physics and technology constraints. Because FELPPC provides a detailed design, a good estimate for the FEL mass, cost, and size can be made from a piece-part count of the FEL. FELPPC requires significant accelerator and FEL expertise to operate. The code can calculate complex FEL configurations including multiple accelerator and wiggler combinations

  4. Formation process of Malaysian modern architecture under influence of nationalism

    OpenAIRE

    宇高, 雄志; 山崎, 大智

    2001-01-01

    This paper examines the formation process of Malaysian modern architecture under the influence of nationalism, through the process of Malaysia's independence. The national style, "Malaysian national architecture", emerged against the political background of the post-colonial situation. Malaysian urban design is likewise determined by the balance between the ethnic cultures and the national culture. In Malaysia, the Malay ethnic culture was chosen as the national culture....

  5. UNICOS CPC6: Automated Code Generation for Process Control Applications

    CERN Document Server

    Fernandez Adiego, B; Prieto Barreiro, I

    2011-01-01

    The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS) [1]. As part of this framework, UNICOS-CPC provides a well-defined library of device types, a methodology, and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) [2] to develop CPC applications. The CPC component is composed of several platform-oriented plug-ins (for PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package in which both the library of device types and the syntax of the generated files are defined. The UAB core is the generic part of this software; it discovers and dynamically calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is introduced. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA g...

  6. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
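A much simpler toy than the paper's general Markovian setting illustrates why thresholds exist at all: the 3-qubit bit-flip repetition code under independent X errors of rate p, with majority-vote decoding. This sketch is an illustration of threshold behavior, not the authors' hard decoding algorithm:

```python
# 3-qubit bit-flip repetition code under independent X errors of rate p.
# Majority-vote decoding fails only when two or more qubits flip, so the
# logical failure rate is the binomial tail:

def logical_failure_rate(p: float) -> float:
    """Probability that majority-vote decoding of the 3-qubit code fails."""
    return 3.0 * p**2 * (1.0 - p) + p**3

# Below the threshold p = 1/2 the encoded qubit beats the bare one:
assert logical_failure_rate(0.05) < 0.05   # 0.00725 < 0.05
assert logical_failure_rate(0.5) == 0.5    # at threshold, no benefit
```

Optimizing the decoder, as the paper does for general completely positive trace-preserving noise, amounts to choosing the correction rule that minimizes this failure probability for the actual (possibly non-Pauli) channel rather than for an assumed depolarizing model.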

  7. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
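The embedded-benchmark idea described above can be sketched as a regression check that ships with the source and is re-run on every release. The model, reference value, and tolerance below are invented for illustration and are not MAAP4 internals:

```python
# Sketch of "dynamic benchmarking": archived reference results for key
# transients are stored alongside the code and re-checked whenever the
# code is upgraded, so benchmark agreement is preserved as the code evolves.

REFERENCE = {"peak_clad_temp_K": 1180.0}   # archived benchmark figure of merit
TOLERANCE = 0.02                           # 2% relative agreement required

def simulate_transient():
    """Stand-in for the full plant simulation; returns key figures of merit."""
    return {"peak_clad_temp_K": 1174.5}

def run_dynamic_benchmarks():
    results = simulate_transient()
    for key, ref in REFERENCE.items():
        rel_err = abs(results[key] - ref) / ref
        if rel_err > TOLERANCE:
            raise AssertionError(f"benchmark drifted: {key} off by {rel_err:.1%}")
    return True
```

In practice one such check would exist per benchmark class (plant transients, large integral experiments, separate effects tests), each exercised automatically before an archive code is distributed.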

  8. Myths and realities of rateless coding

    KAUST Repository

    Bonello, Nicholas

    2011-08-01

    Fixed-rate and rateless channel codes are generally treated separately in the related research literature, and so a novice in the field inevitably gets the impression that these channel codes are unrelated. By contrast, in this treatise, we endeavor to further develop a link between the traditional fixed-rate codes and the recently developed rateless codes by delving into their underlying attributes. This joint treatment is beneficial for two principal reasons. First, it facilitates the task of researchers and practitioners who might be familiar with fixed-rate codes and would like to jump-start their understanding of the recently developed concepts in the rateless reality. Second, it provides grounds for extending the use of the well-understood code-design tools, originally contrived for fixed-rate codes, to the realm of rateless codes. Indeed, these versatile tools proved to be vital in the design of diverse fixed-rate-coded communications systems, and thus our hope is that they will further elucidate the associated performance ramifications of the rateless coded schemes. © 2011 IEEE.
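The core rateless (fountain-code) principle can be sketched in a few lines: each encoded symbol is the XOR of a subset of source symbols, and the decoder peels off degree-1 symbols. The masks below are fixed by hand for clarity; a real LT encoder draws them at random from a degree distribution and can emit symbols indefinitely, which is what makes the code rateless:

```python
from functools import reduce

# Minimal fountain-coding sketch: encode source symbols as XORs of subsets,
# decode by repeatedly resolving symbols whose mask has exactly one
# still-unknown source index (the "peeling" / belief-propagation decoder).

source = [0b1010, 0b0110, 0b1111]   # k = 3 source symbols
masks = [{0}, {0, 1}, {1, 2}]       # subsets "drawn" by the encoder
encoded = [(set(m), reduce(lambda acc, i: acc ^ source[i], m, 0)) for m in masks]

def peel(symbols, k):
    """Peeling decoder: resolve degree-1 symbols until all k are recovered."""
    recovered = {}
    while len(recovered) < k:
        progress = False
        for mask, value in symbols:
            unknown = mask - recovered.keys()
            if len(unknown) == 1:
                (i,) = unknown
                known_xor = reduce(lambda acc, j: acc ^ recovered[j], mask - {i}, 0)
                recovered[i] = value ^ known_xor
                progress = True
        if not progress:
            raise RuntimeError("ripple empty: more encoded symbols needed")
    return [recovered[i] for i in range(k)]
```

A fixed-rate code would stop after a predetermined number of encoded symbols; the rateless encoder simply keeps producing fresh masks until the receiver signals that peeling has completed, which is the link between the two families that the treatise develops.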

  9. Interface of RETRAN/MASTER Code System for APR1400

    International Nuclear Information System (INIS)

    Ku, Keuk Jong; Kang, Sang Hee; Kim, Han Gon

    2008-01-01

    MASTER (Multi-purpose Analyzer for Static and Transient Effects of Reactors), which was developed by KAERI, is a nuclear analysis and design code that can simulate a pressurized water reactor core or a boiling water reactor core in 3-dimensional geometry. RETRAN is a best-estimate code for transient analysis of non-LOCA events. RETRAN generates the neutron number density in the core using a point kinetics model that includes feedback reactivities and converts the neutron number density into reactor power. Conventionally, the RETRAN power calculation roughly extrapolates feedback reactivities that MASTER provides only once, before the transient analysis. The purpose of this paper is to interface the RETRAN and MASTER codes through real-time processing, so that adequate feedback reactivities are supplied to RETRAN. We therefore developed an interface code called MATRAN for real-time feedback reactivity processing. For the application of the MATRAN code, we compare the results of the real-time MATRAN code with those of the conventional RETRAN/MASTER code
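The coupling being interfaced can be sketched as a one-delayed-group point-kinetics step whose reactivity includes a temperature feedback term of the kind a core code supplies to the systems code. All constants below are invented for illustration; this is not the actual RETRAN or MASTER model:

```python
# One-delayed-group point kinetics with a fuel-temperature feedback
# reactivity, advanced by explicit Euler. Constants are illustrative only.

BETA = 0.0065        # delayed-neutron fraction
GEN_TIME = 1.0e-4    # neutron generation time Lambda [s]
DECAY = 0.08         # precursor decay constant lambda [1/s]
ALPHA_T = -2.0e-5    # fuel-temperature reactivity coefficient [dk/k per K]
T_REF = 600.0        # reference fuel temperature [K]

def step(n, c, rho_ext, fuel_temp, dt):
    """Advance neutron density n and precursor density c by one Euler step."""
    rho = rho_ext + ALPHA_T * (fuel_temp - T_REF)   # total reactivity with feedback
    dn = ((rho - BETA) / GEN_TIME) * n + DECAY * c
    dc = (BETA / GEN_TIME) * n - DECAY * c
    return n + dt * dn, c + dt * dc

# With zero external reactivity, precursors in equilibrium, and the fuel at
# the reference temperature, the power stays flat; heating the fuel inserts
# negative reactivity and the power falls.
c_eq = BETA / (GEN_TIME * DECAY)                    # equilibrium precursors for n = 1
n_flat, _ = step(1.0, c_eq, 0.0, T_REF, 1e-4)
n_hot, _ = step(1.0, c_eq, 0.0, T_REF + 100.0, 1e-4)
```

Refreshing the feedback term each step from the 3-D core solution, rather than extrapolating a single pre-transient value, is precisely the improvement the real-time interface provides.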

  10. Dress codes and appearance policies: challenges under federal legislation, part 2: title VII of the civil rights act and gender.

    Science.gov (United States)

    Mitchell, Michael S; Koen, Clifford M; Darden, Stephen M

    2014-01-01

    As more and more individuals express themselves with tattoos and body piercings and push the envelope on what is deemed appropriate in the workplace, employers have an increased need for creation and enforcement of reasonable dress codes and appearance policies. As with any employment policy or practice, an appearance policy must be implemented and enforced without regard to an individual's race, color, gender, national origin, religion, disability, age, or other protected status. A policy governing dress and appearance based on the business needs of an employer that is applied fairly and consistently and does not have a disproportionate effect on any protected class will generally be upheld if challenged in court. By examining some of the more common legal challenges to dress codes and how courts have resolved the disputes, health care managers can avoid many potential problems. This article, the second part of a 3-part examination of dress codes and appearance policies, focuses on the issue of gender under the Civil Rights Act of 1964. Pertinent court cases that provide guidance for employers are addressed.

  11. Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction

    Energy Technology Data Exchange (ETDEWEB)

    Taherzadeh, M.

    1987-11-13

    The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the plasma separation program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the maximum efficiency of the betacell energy device, the degradation due to the emitting source radiation, and the source/cell lifetime power reduction processes. Additionally, a comparison is made between Schottky and PN junction devices for betacell battery design purposes. Computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.
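    The upper-limit power calculation mentioned for the JV distribution can be sketched with a textbook ideal-diode model: sweep the voltage and pick the point maximizing P = J·V. All parameter values below are hypothetical illustrations and do not come from the BCELL code.

```python
import math

def max_power_point(j_src, j0=1e-9, vt=0.025, steps=2000):
    """Scan the diode J-V curve J(V) = j_src - j0*(exp(V/vt) - 1)
    for the voltage maximizing P = J*V (betacell upper-power sketch;
    j_src is the beta-generated current density, values hypothetical)."""
    v_oc = vt * math.log(j_src / j0 + 1.0)   # open-circuit voltage
    best_v, best_p = 0.0, 0.0
    for i in range(1, steps):
        v = v_oc * i / steps
        p = (j_src - j0 * (math.exp(v / vt) - 1.0)) * v
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p
```

    For a source current density of 1 µA/cm² the maximum-power voltage falls a little below the open-circuit voltage, as expected for any diode-like converter.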

  12. Improvement of the spallation-reaction simulation code by considering both the high-momentum intranuclear nucleons and the preequilibrium process

    International Nuclear Information System (INIS)

    Ishibashi, K.; Miura, Y.; Sakae, T.

    1990-01-01

    In the present study, intranuclear nucleons with a high momentum are introduced into the intranuclear cascade calculation, and preequilibrium effects are considered at the end of the cascade process. The improvements made in HETC (High Energy Transport Code) are outlined, focusing on intranuclear nucleons with a high momentum and on termination of the intranuclear cascade process. The cutoff energy is discussed, and Monte Carlo calculations based on an exciton model are presented and analyzed. The experimental high-energy neutrons in the backward direction are successfully reproduced. The preequilibrium effect is considered in a local manner and is introduced as a simple probability density function for terminating the intranuclear cascade process. The resultant neutron spectra reproduce the shoulders of the experimental data in the region of 20 to 50 MeV. The exciton model is coded with a Monte Carlo algorithm. The effect of the exciton-model calculation is not appreciable except for intermediate-energy neutrons in the backward direction. (N.K.)

  13. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that it is difficult to obtain good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  14. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  15. DIONISIO 2.0: New version of the code for simulating a whole nuclear fuel rod under extended irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Soba, Alejandro, E-mail: soba@cnea.gov.ar; Denis, Alicia

    2015-10-15

    Highlights: • A new version of the DIONISIO code is developed. • DIONISIO is devoted to simulating the behavior of a nuclear fuel rod in operation. • The formerly two-dimensional simulation of a pellet-cladding segment is now extended to the whole rod length. • An acceptable and more realistic agreement with experimental data is obtained. • The prediction range of our code is extended up to an average burnup of 60 MWd/kgU. - Abstract: Version 2.0 of the DIONISIO code, which incorporates diverse new aspects, has recently been developed. One of them concerns the code architecture, which now takes into account the axial variation of the conditions external to the rod. For this purpose, the rod is divided into a number of axial segments. In each one the program considers the system formed by a pellet and the corresponding cladding portion and solves the numerous phenomena that take place under the local conditions of linear power and coolant temperature, which are given as input parameters. To do this, a bi-dimensional domain in the r–z plane is considered, where cylindrical symmetry and also symmetry with respect to the pellet mid-plane are assumed. The results obtained for this representative system are assumed valid for the complete segment. The program thus produces in each rod section the values of the temperature, stress and strain, among others, as functions of the local coordinates r and z. Then, the general rod parameters (internal rod pressure, amount of fission gas released, pellet stack elongation, etc.) are evaluated. Moreover, new calculation tools designed to extend the application range of the code to high burnup, which were reported elsewhere, have also been incorporated into DIONISIO 2.0. With these improvements, the code results are compared with some 33 experiments compiled in the IFPE database, which cover more than 380 fuel rods irradiated up to average burnup levels of 40–60 MWd/kgU. The results of these
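    The axial-segmentation strategy described in the abstract (solve a local 2-D pellet/cladding problem per segment under its local linear power and coolant temperature, then aggregate whole-rod parameters) can be sketched as a driver loop. The segment solver and every name below are hypothetical stand-ins, not DIONISIO routines.

```python
def simulate_rod(linear_power, coolant_temp, solve_segment):
    """Axial-segment driver in the spirit of DIONISIO 2.0: each segment
    is solved under its local conditions, then segment results are
    combined into whole-rod quantities (all names hypothetical).

    linear_power, coolant_temp: per-segment local conditions.
    solve_segment(q, t): stand-in for the 2-D r-z pellet/cladding solve,
    returning dz (elongation), gas (fission gas released), t_max."""
    results = [solve_segment(q, t) for q, t in zip(linear_power, coolant_temp)]
    return {
        "elongation": sum(r["dz"] for r in results),     # pellet stack growth
        "released_gas": sum(r["gas"] for r in results),  # total gas release
        "peak_temp": max(r["t_max"] for r in results),   # hottest segment
    }
```

    The point of the structure is that whole-rod outputs (pressure, gas release, elongation) are simple aggregations over segments, while all the physics stays inside the per-segment solve.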

  16. DIONISIO 2.0: New version of the code for simulating a whole nuclear fuel rod under extended irradiation

    International Nuclear Information System (INIS)

    Soba, Alejandro; Denis, Alicia

    2015-01-01

    Highlights: • A new version of the DIONISIO code is developed. • DIONISIO is devoted to simulating the behavior of a nuclear fuel rod in operation. • The formerly two-dimensional simulation of a pellet-cladding segment is now extended to the whole rod length. • An acceptable and more realistic agreement with experimental data is obtained. • The prediction range of our code is extended up to an average burnup of 60 MWd/kgU. - Abstract: Version 2.0 of the DIONISIO code, which incorporates diverse new aspects, has recently been developed. One of them concerns the code architecture, which now takes into account the axial variation of the conditions external to the rod. For this purpose, the rod is divided into a number of axial segments. In each one the program considers the system formed by a pellet and the corresponding cladding portion and solves the numerous phenomena that take place under the local conditions of linear power and coolant temperature, which are given as input parameters. To do this, a bi-dimensional domain in the r–z plane is considered, where cylindrical symmetry and also symmetry with respect to the pellet mid-plane are assumed. The results obtained for this representative system are assumed valid for the complete segment. The program thus produces in each rod section the values of the temperature, stress and strain, among others, as functions of the local coordinates r and z. Then, the general rod parameters (internal rod pressure, amount of fission gas released, pellet stack elongation, etc.) are evaluated. Moreover, new calculation tools designed to extend the application range of the code to high burnup, which were reported elsewhere, have also been incorporated into DIONISIO 2.0. With these improvements, the code results are compared with some 33 experiments compiled in the IFPE database, which cover more than 380 fuel rods irradiated up to average burnup levels of 40–60 MWd/kgU. The results of these

  17. DIONISIO 2.0: new version of the code for simulating the behavior of a power fuel rod under irradiation

    International Nuclear Information System (INIS)

    Soba, A; Denis, A; Lemes, M; Gonzalez, M E

    2012-01-01

    During the last ten years the Codes and Models Section of the Nuclear Fuel Cycle Department has been developing the DIONISIO code, which simulates most of the main phenomena that take place within a fuel rod during the normal operation of a nuclear reactor: temperature distribution, thermal expansion, elastic and plastic strain, creep, irradiation growth, pellet-cladding mechanical interaction, fission gas release, swelling and densification. Axial symmetry is assumed and cylindrical finite elements are used to discretize the domain. The code has a modular structure and contains more than 40 interconnected models. A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burnup, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels under LWR conditions, predict the radial distribution of power density, burnup and concentration of diverse nuclides within the pellet. New models of porosity and fission gas release in the rim, as well as of the influence of the microstructure of this zone on the thermal conductivity of the pellet, are presently under development. A considerable computational challenge was the inclusion of the option of simulating the whole rod, by dividing it into a number of axial segments, at the user's choice, and solving the complete problem in each segment. All the general rod parameters (pressure, fission gas release, volume, etc.) are evaluated at the end of every time step. This modification allows taking into account the axial variation of the linear power and, consequently, evaluating the dependence of all the significant rod parameters on that coordinate. DIONISIO was selected to participate in the FUMEX III code intercomparison project, organized by the IAEA from 2008 to 2011. The results of the simulations performed within this project were compared with more than 30 experiments involving more than 150 irradiated rods. The high number

  18. Automated processing of thermal infrared images of Osservatorio Vesuviano permanent surveillance network by using Matlab code

    Science.gov (United States)

    Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa

    2017-04-01

    The permanent thermal infrared surveillance network of Osservatorio Vesuviano (INGV) is composed of 6 stations which acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of Osservatorio Vesuviano in order to process the infrared data and to extract all the information they contain. In a first phase the infrared data are processed by an automated system (A.S.I.R.A. Acq - Automated System of IR Analysis and Acquisition) developed in the Matlab environment and with a user-friendly graphical user interface (GUI). ASIRA daily generates time-series of residual temperature values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time-series are displayed in the Surveillance Room of Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes, b) IR image co-registration with respect to a reference frame, c) seasonal correction by using a background-removal methodology, d) filing of the IR matrices and of the processed data in shared archives accessible to interrogation. The daily archived records can also be processed by ASIRA Plot (Matlab code with GUI) to visualize IR data time-series and to help in evaluating input parameters for further data processing and analysis. Additional processing features are accomplished in a second phase by ASIRA Tools, a Matlab code with GUI developed to extract further information from the dataset in an automated way.
The main functions of ASIRA Tools are: a) the analysis of temperature variations of each pixel of the IR frame in a given time interval, b) the removal of seasonal effects from the temperature of every pixel in the IR frames by using an analytic approach (removal of the sinusoidal long-term seasonal component by using a
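    The per-pixel seasonal-removal step described for ASIRA Tools (fitting and subtracting a sinusoidal annual component) can be sketched with an ordinary least-squares fit. This is a generic Python implementation of the idea, not the Matlab original.

```python
import numpy as np

def remove_seasonal(t_days, temps):
    """Fit an annual sinusoid plus a constant offset to one pixel's
    temperature time series by least squares, and return the residuals
    (a generic sketch of sinusoidal seasonal-component removal)."""
    w = 2.0 * np.pi / 365.25                      # annual angular frequency
    A = np.column_stack([np.sin(w * t_days),      # design matrix:
                         np.cos(w * t_days),      # sin, cos, offset
                         np.ones_like(t_days)])
    coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
    return temps - A @ coef                       # de-seasoned residuals
```

    Applied to a purely seasonal series, the residuals vanish; applied to real fumarole data, what remains is the non-seasonal temperature evolution that the surveillance time-series are meant to track.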

  19. Unfolding code for neutron spectrometry based on neural nets technology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed with a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the "Robust Design of Artificial Neural Networks" methodology. The code is easy to use, friendly and intuitive to the user. It was designed for a Bonner sphere system based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count rates measured with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)
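    The unfolding step itself (7 count rates in, 60 spectrum bins out) amounts to evaluating a small feed-forward network. The sketch below uses random, untrained weights purely to show the shapes involved; it is not the optimized architecture embedded in the actual code.

```python
import numpy as np

def unfold_spectrum(count_rates, W1, b1, W2, b2):
    """Forward pass of a small feed-forward net mapping 7 Bonner-sphere
    count rates to a 60-bin neutron spectrum. In the real code the
    weights would come from training; here they are placeholders."""
    h = np.tanh(W1 @ count_rates + b1)        # hidden layer
    return np.maximum(W2 @ h + b2, 0.0)       # clamp: spectrum bins >= 0

# Toy, untrained weights just to exercise the shapes (hypothetical sizes
# except for the 7-input / 60-output interface stated in the abstract).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 7)), np.zeros(10)
W2, b2 = rng.normal(size=(60, 10)), np.zeros(60)
spectrum = unfold_spectrum(rng.random(7), W1, b1, W2, b2)
```

    The 15 dosimetric quantities would then follow by folding the unfolded spectrum with fluence-to-dose conversion coefficients.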

  20. [Implications of mental image processing in the deficits of verbal information coding during normal aging].

    Science.gov (United States)

    Plaie, Thierry; Thomas, Delphine

    2008-06-01

    Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age of 24 years) and 19 older adults (average age of 75 years) were assessed using recall tasks according to the imagery value of the stimuli to learn. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The analysis of variance indicates a greater decrease with age of the concreteness effect. The major contribution of our study rests on the fact that the decline with age of the dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  1. Benchmark studies of BOUT++ code and TPSMBI code on neutral transport during SMBI

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.H. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Z.H., E-mail: zhwang@swip.ac.cn [Southwestern Institute of Physics, Chengdu 610041 (China); Guo, W., E-mail: wfguo@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Ren, Q.L. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Sun, A.P.; Xu, M.; Wang, A.K. [Southwestern Institute of Physics, Chengdu 610041 (China); Xiang, N. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China)

    2017-06-09

    SMBI (supersonic molecule beam injection) plays an important role in tokamak plasma fuelling, density control and ELM mitigation in magnetic confinement plasma physics, and has been widely used in many tokamaks. The trans-neut module of the BOUT++ code is the only large-scale parallel 3D fluid code used to simulate the SMBI fueling process, while the TPSMBI (transport of supersonic molecule beam injection) code is a recently developed 1D fluid code for SMBI. In order to find a method to increase SMBI fueling efficiency in H-mode plasma, especially for ITER, it is important first to verify the codes. A benchmark study between the trans-neut module of the BOUT++ code and the TPSMBI code on the radial transport dynamics of neutrals during SMBI has been successfully achieved for the first time, in both slab and cylindrical coordinates. The simulation results from the trans-neut module of the BOUT++ code and the TPSMBI code agree very well with each other. Different upwind schemes have been compared for handling the sharp gradient front region during the inward propagation of SMBI, for the sake of code stability. The influence of the WENO3 (weighted essentially non-oscillatory) and the third-order upwind schemes on the benchmark results has also been discussed. - Highlights: • A 1D model of SMBI has been developed. • Benchmarks of the BOUT++ and TPSMBI codes have been completed for the first time. • The influence of the WENO3 and the third-order upwind schemes on the benchmark results has also been discussed.
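    The scheme-sensitivity issue at the sharp SMBI front can be illustrated with the simplest member of the family of schemes compared in the paper: a first-order upwind step for 1-D advection. This generic stencil is only an illustration of upwinding, not the BOUT++ or TPSMBI numerics.

```python
import numpy as np

def upwind_step(u, c, dx, dt):
    """One explicit first-order upwind step for du/dt + c du/dx = 0
    with c > 0 on a periodic grid: the spatial difference is taken
    from the upstream side, which keeps sharp fronts stable (though
    diffused; higher-order upwind or WENO3 sharpens them)."""
    return u - c * dt / dx * (u - np.roll(u, 1))
```

    On a periodic grid this update conserves the integral of u exactly, which is one of the sanity checks used when comparing schemes on the propagating SMBI front.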

  2. Further validation of liquid metal MHD code for unstructured grid based on OpenFOAM

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jingchao; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn; He, Qingyun; Ye, Minyou

    2015-11-15

    Highlights: • A specific correction scheme has been adopted to correct the calculated results for non-orthogonal meshes. • The developed MHD code based on the OpenFOAM platform has been validated by benchmark cases under uniform and non-uniform magnetic fields in round and rectangular ducts. • ALEX experimental results have been used to validate the MHD code based on OpenFOAM. - Abstract: In fusion liquid metal blankets, complex geometries involving contractions, expansions, bends and manifolds are very common. The characteristics of liquid metal flow in these geometries are significant. In order to extend the magnetohydrodynamic (MHD) solver developed on the OpenFOAM platform to complex geometries, an MHD solver based on unstructured meshes has been implemented. The adoption of non-orthogonal correction techniques in the solver makes it possible to process non-orthogonal meshes in complex geometries. The present paper focuses on the validation of the code under critical conditions. An analytical benchmark case and two experimental benchmark cases were used to validate the code. Benchmark case I is MHD flow in a circular pipe with arbitrary electric conductivity of the walls in a uniform magnetic field. Benchmark cases II and III are experimental cases of 3D laminar steady MHD flow under a fringing magnetic field. In all these cases, the numerical results match well with the benchmark data.

  3. Further validation of liquid metal MHD code for unstructured grid based on OpenFOAM

    International Nuclear Information System (INIS)

    Feng, Jingchao; Chen, Hongli; He, Qingyun; Ye, Minyou

    2015-01-01

    Highlights: • A specific correction scheme has been adopted to correct the calculated results for non-orthogonal meshes. • The developed MHD code based on the OpenFOAM platform has been validated by benchmark cases under uniform and non-uniform magnetic fields in round and rectangular ducts. • ALEX experimental results have been used to validate the MHD code based on OpenFOAM. - Abstract: In fusion liquid metal blankets, complex geometries involving contractions, expansions, bends and manifolds are very common. The characteristics of liquid metal flow in these geometries are significant. In order to extend the magnetohydrodynamic (MHD) solver developed on the OpenFOAM platform to complex geometries, an MHD solver based on unstructured meshes has been implemented. The adoption of non-orthogonal correction techniques in the solver makes it possible to process non-orthogonal meshes in complex geometries. The present paper focuses on the validation of the code under critical conditions. An analytical benchmark case and two experimental benchmark cases were used to validate the code. Benchmark case I is MHD flow in a circular pipe with arbitrary electric conductivity of the walls in a uniform magnetic field. Benchmark cases II and III are experimental cases of 3D laminar steady MHD flow under a fringing magnetic field. In all these cases, the numerical results match well with the benchmark data.

  4. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
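    The divisive-normalization account described above can be written in one line: the subjective value of an item is divided by a saturation constant plus the mean of recently observed values, so current valuations vary inversely with the recent-value history. A minimal sketch (the functional form is the standard one; the parameter value is hypothetical):

```python
def normalized_value(v, history, sigma=1.0):
    """Divisive normalization: subjective value of v scales inversely
    with the average of recently observed values. sigma is a
    semi-saturation constant (hypothetical value)."""
    context = sum(history) / len(history) if history else 0.0
    return v / (sigma + context)
```

    The same item is thus valued less after a run of high-value items than after a run of low-value ones, which is exactly the behavioral adaptation the study reports.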

  5. The modeling of fuel rod behaviour under RIA conditions in the code DYN3D

    International Nuclear Information System (INIS)

    Rohde, U.

    1998-01-01

    A description of the fuel rod behaviour and heat transfer model used in the code DYN3D for nuclear reactor core dynamic simulations is given. Besides the solution of the heat conduction equations in fuel and cladding, the model comprises a detailed description of heat transfer in the gas gap by conduction, radiation and fuel-cladding contact. The gas gap behaviour is modeled in a mechanistic way, taking into account transient changes of the gas gap parameters based on given conditions for the initial state. Thermal, elastic and plastic deformations of fuel and cladding are taken into account within a 1D approximation. Numerical studies concerning fuel rod behaviour under RIA conditions in power reactors are reported. Fuel rod behaviour at the high pressures and flow rates of power reactors differs from the behaviour under the atmospheric-pressure, stagnant-flow conditions of the experiments. The mechanisms of fuel rod failure for fresh and burned fuel reported in the literature can be qualitatively reproduced by the DYN3D model. (author)

  6. On the Organizational Dynamics of the Genetic Code

    KAUST Repository

    Zhang, Zhang

    2011-06-07

    The organization of the canonical genetic code needs to be thoroughly illuminated. Here we reorder the four nucleotides—adenine, thymine, guanine and cytosine—according to their emergence in evolution, and apply the organizational rules to devising an algebraic representation for the canonical genetic code. Under a framework of the devised code, we quantify codon and amino acid usages from a large collection of 917 prokaryotic genome sequences, and associate the usages with its intrinsic structure and classification schemes as well as amino acid physicochemical properties. Our results show that the algebraic representation of the code is structurally equivalent to a content-centric organization of the code and that codon and amino acid usages under different classification schemes were correlated closely with GC content, implying a set of rules governing composition dynamics across a wide variety of prokaryotic genome sequences. These results also indicate that codons and amino acids are not randomly allocated in the code, where the six-fold degenerate codons and their amino acids have important balancing roles for error minimization. Therefore, the content-centric code is of great usefulness in deciphering its hitherto unknown regularities as well as the dynamics of nucleotide, codon, and amino acid compositions.
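    The codon-usage and GC-content tallies underlying the reported correlations can be sketched as a simple frame-0 count over a coding sequence; this assumes an in-frame sequence and is only the counting step, not the study's algebraic representation of the code.

```python
from collections import Counter

def codon_usage_and_gc(seq):
    """Count codon usage (reading frame 0) and overall GC content for
    a coding DNA sequence: the basic quantities correlated with each
    other across prokaryotic genomes in the study."""
    seq = seq.upper()
    usable = len(seq) - len(seq) % 3              # drop trailing partial codon
    usage = Counter(seq[i:i + 3] for i in range(0, usable, 3))
    gc = sum(seq.count(b) for b in "GC") / len(seq)
    return usage, gc
```

    Summing codon counts per encoded amino acid would then give the amino acid usage examined under the different classification schemes.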

  7. On the Organizational Dynamics of the Genetic Code

    KAUST Repository

    Zhang, Zhang; Yu, Jun

    2011-01-01

    The organization of the canonical genetic code needs to be thoroughly illuminated. Here we reorder the four nucleotides—adenine, thymine, guanine and cytosine—according to their emergence in evolution, and apply the organizational rules to devising an algebraic representation for the canonical genetic code. Under a framework of the devised code, we quantify codon and amino acid usages from a large collection of 917 prokaryotic genome sequences, and associate the usages with its intrinsic structure and classification schemes as well as amino acid physicochemical properties. Our results show that the algebraic representation of the code is structurally equivalent to a content-centric organization of the code and that codon and amino acid usages under different classification schemes were correlated closely with GC content, implying a set of rules governing composition dynamics across a wide variety of prokaryotic genome sequences. These results also indicate that codons and amino acids are not randomly allocated in the code, where the six-fold degenerate codons and their amino acids have important balancing roles for error minimization. Therefore, the content-centric code is of great usefulness in deciphering its hitherto unknown regularities as well as the dynamics of nucleotide, codon, and amino acid compositions.

  8. Fission Product Transport Models Adopted in REFPAC Code for LOCA Conditions in PWR and WWER NPPS

    International Nuclear Information System (INIS)

    Strupczewski, A.

    2003-01-01

    The report presents the assumptions and physical models used for calculating fission product releases from nuclear reactors, their behavior inside the containment, and leakages to the environment after a large-break loss-of-coolant accident (LB LOCA). They are the basis of the code REFPAC (RElease of Fission Products under Accident Conditions), designed primarily to represent the significant physical processes occurring after an LB LOCA. The code describes these processes using three different models. Model 1 corresponds to established US and Russian practice, Model 2 includes all conservative assumptions that are in agreement with the actual state of the art, and Model 3 incorporates formulae and parameter values actually used in EU practice. (author)

  9. Vectorization of nuclear codes 90-1

    International Nuclear Information System (INIS)

    Nonomiya, Iwao; Nemoto, Toshiyuki; Ishiguro, Misako; Harada, Hiroo; Hori, Takeo.

    1990-09-01

    Vectorization has been carried out for four codes: SONATINA-2V HTTR version, TRIDOSE, VIENUS, and SCRYU. SONATINA-2V HTTR version is a code for analyzing the dynamic behavior of fuel blocks in a vertical slice of the HTGR (High Temperature Gas-cooled Reactor) core under seismic perturbation; TRIDOSE is a code for calculating environmental tritium concentration and dose; VIENUS is a code for analyzing the viscoelastic stress of the fuel block of the HTTR (High Temperature gas-cooled Test Reactor); and SCRYU is a thermal-hydraulics code with a boundary-fitted coordinate system. The speedup ratio of the vectorized versions over the original scalar ones is 5.2 for the SONATINA-2V HTTR version, 5.9 to 6.9 for TRIDOSE, 6.7 for VIENUS, and 7.6 for SCRYU. In this report, we describe an outline of the codes, the techniques used for vectorization, verification of the computed results, and the speedup achieved by the vectorized codes. (author)

  10. Single-shot secure quantum network coding on butterfly network with free public communication

    Science.gov (United States)

    Owari, Masaki; Kato, Go; Hayashi, Masahito

    2018-01-01

    Quantum network coding on the butterfly network has been studied as a typical example of a quantum multiple-cast network. We propose a secure quantum network code for the butterfly network with free public classical communication in the multiple-unicast setting, under restricted eavesdropper power. This protocol certainly transmits quantum states when there is no attack. We also show secrecy, with shared randomness as an additional resource, when the eavesdropper wiretaps one of the channels in the butterfly network and also obtains the information sent through the public classical communication. Our protocol does not require a verification process, which ensures single-shot security.

  11. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    Science.gov (United States)

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
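
    The iterative rescoring idea described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the blending rule, the `alpha` weight, and the dictionary shapes are assumptions made for the example.

```python
def rescore(scores, cooccur, alpha=0.5, iterations=3):
    """Adjust auto-coder confidences using code co-occurrence propensities.

    scores:   dict code -> confidence in [0, 1] from a primary auto-coder
    cooccur:  dict (a, b) -> estimated P(a assigned | b assigned)
    alpha:    weight given to co-occurrence evidence (assumed parameter)
    """
    current = dict(scores)
    for _ in range(iterations):
        updated = {}
        for code, s in scores.items():
            others = [c for c in current if c != code]
            weight = sum(current[c] for c in others)
            if not others or weight == 0:
                updated[code] = s
                continue
            # Average conditional propensity, weighted by the other codes'
            # current confidences; unknown pairs fall back to the prior score.
            support = sum(cooccur.get((code, c), s) * current[c]
                          for c in others) / weight
            updated[code] = (1 - alpha) * s + alpha * support
        current = updated
    return current
```

    Codes that rarely co-occur in the training data pull each other's confidence down over the iterations, while compatible codes reinforce each other.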

  12. Asymmetric Spatial Processing Under Cognitive Load.

    Science.gov (United States)

    Naert, Lien; Bonato, Mario; Fias, Wim

    2018-01-01

    Spatial attention allows us to selectively process information within a certain location in space. Despite the vast literature on spatial attention, the effect of cognitive load on spatial processing is still not fully understood. In this study we added cognitive load to a spatial processing task, so as to see whether it would differentially impact upon the processing of visual information in the left versus the right hemispace. The main paradigm consisted of a detection task that was performed during the maintenance interval of a verbal working memory task. We found that increasing cognitive working memory load had a more negative impact on detecting targets presented on the left side compared to those on the right side. The strength of the load effect correlated with the strength of the interaction on an individual level. The implications of an asymmetric attentional bias with a relative disadvantage for the left (vs the right) hemispace under high verbal working memory (WM) load are discussed.

  13. Asymmetric Spatial Processing Under Cognitive Load

    Directory of Open Access Journals (Sweden)

    Lien Naert

    2018-04-01

    Full Text Available Spatial attention allows us to selectively process information within a certain location in space. Despite the vast literature on spatial attention, the effect of cognitive load on spatial processing is still not fully understood. In this study we added cognitive load to a spatial processing task, so as to see whether it would differentially impact upon the processing of visual information in the left versus the right hemispace. The main paradigm consisted of a detection task that was performed during the maintenance interval of a verbal working memory task. We found that increasing cognitive working memory load had a more negative impact on detecting targets presented on the left side compared to those on the right side. The strength of the load effect correlated with the strength of the interaction on an individual level. The implications of an asymmetric attentional bias with a relative disadvantage for the left (vs the right) hemispace under high verbal working memory (WM) load are discussed.

  14. LDPC-PPM Coding Scheme for Optical Communication

    Science.gov (United States)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
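
    The bits-to-PPM mapping and the Poisson channel model described above can be illustrated with a short sketch. This is not the scheme's actual implementation; the slot count, the mean photon numbers `ns`/`nb`, and the maximum-count demodulator are assumptions chosen for the example.

```python
import math
import random

def bits_to_ppm(bits, M=16):
    """Map each group of log2(M) bits to one PPM slot index in 0..M-1."""
    k = int(math.log2(M))
    assert len(bits) % k == 0
    symbols = []
    for i in range(0, len(bits), k):
        idx = 0
        for b in bits[i:i + k]:
            idx = (idx << 1) | b
        symbols.append(idx)
    return symbols

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the modest means used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def ppm_slot_counts(symbol, rng, M=16, ns=50.0, nb=0.01):
    """Photon counts per slot: mean ns+nb in the pulsed slot, nb elsewhere."""
    return [poisson(rng, ns + nb if s == symbol else nb) for s in range(M)]

def demodulate(counts):
    """For this Poisson model, picking the slot with the most photons is ML."""
    return max(range(len(counts)), key=counts.__getitem__)
```

    With a strong signal (`ns` much larger than `nb`), the pulsed slot dominates the counts and the hard-decision demodulator recovers the symbol; the LDPC decoder would work on soft per-slot likelihoods instead.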

  15. A quantum algorithm for Viterbi decoding of classical convolutional codes

    OpenAIRE

    Grice, Jon R.; Meyer, David A.

    2014-01-01

    We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions, for instance, large constraint length $Q$ and short decode frames $N$. In this paper the proposed algorithm is applied to decoding classical convolutional codes. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly...
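
    For reference, the classical Viterbi decoder that the QVA aims to accelerate can be sketched for a small rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal — an illustrative choice, not taken from the paper):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators 111 and 101 (octal 7, 5)."""
    s1 = s2 = 0  # two memory elements -> constraint length 3
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 7
        out.append(b ^ s2)       # generator 5
        s1, s2 = b, s1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi: keep the minimum-Hamming-distance survivor per state."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metrics = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metrics = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for (s1, s2), m in metrics.items():
            if m == INF:
                continue
            for b in (0, 1):
                cost = m + ((b ^ s1 ^ s2) != r0) + ((b ^ s2) != r1)
                nxt = (b, s1)
                if cost < new_metrics[nxt]:
                    new_metrics[nxt] = cost
                    new_paths[nxt] = paths[(s1, s2)] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(states, key=lambda s: metrics[s])
    return paths[best]
```

    The trellis has $2^{Q-1}$ states, which is the exponential cost in the constraint length $Q$ that motivates seeking a quantum speedup.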

  16. Development and verification of coupled fluid-structural dynamic codes for stress analysis of reactor vessel internals under blowdown loading

    International Nuclear Information System (INIS)

    Krieg, R.; Schlechtendahl, E.G.

    1977-01-01

    YAQUIR has been applied to large PWR blowdown problems and compared with LECK results. The structural model of CYLDY2 and the fluid model of YAQUIR have been coupled in the code STRUYA. First tests with the fluid dynamic systems code FLUST have been successful. The incompressible fluid version of the 3D coupled code FLUX for HDR geometry was checked against some analytical test cases and was used for evaluation of the eigenfrequencies of the coupled system. Several test cases were run with the two-phase flow code SOLA-DF with satisfactory results. Remarkable agreement was found between YAQUIR results and experimental data obtained from shallow-water analogy experiments. A test for the investigation of nonequilibrium two-phase flow dynamics has been specified in some detail. The test is to be performed in early 1978 in the water loop of the IRB. Good agreement was found between the natural frequency predictions for the core barrel obtained from CYLDY2 and STRUDL/DYNAL. Work started on the improvement of the beam mode treatment in CYLDY2; the modified version will be named CYLDY3. The fluid dynamic code SING1, based on an advanced singularity method and applicable to a broad class of highly transient, incompressible 3D problems with negligible viscosity, has been developed and tested. It will be used in connection with the planned laboratory experiments in order to investigate the effect of the core structure on the blowdown process. Coupling of SING1 with structural dynamics is under way. (orig./RW) [de

  17. Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels

    Directory of Open Access Journals (Sweden)

    Karuppasami Sridhar

    2008-01-01

    Full Text Available This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. A part of the constructed code is inherently built with this knowledge and hence able to withstand the phase rotations of $2\pi/M$ radians, where $M$ is the number of phase symmetries in the signal set, that occur at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over an integer ring, with quadrature phase shift keying (QPSK) modulated signals transmitted over a static channel, but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of a given standard deviation per symbol with small loss.

  18. Study of counter current flow limitation model of MARS-KS and SPACE codes under Dukler's air/water flooding test conditions

    International Nuclear Information System (INIS)

    Lee, Won Woong; Kim, Min Gil; Lee, Jeong Ik; Bang, Young Seok

    2015-01-01

    In particular, counter-current flow limitation (CCFL) occurs in components such as the hot leg, downcomer annulus, and steam generator inlet plenum during a LOCA, where flows in two opposite directions are possible, and it can limit the injection of ECCS water. CCFL is therefore one of the thermal-hydraulic models with a significant effect on the performance of reactor safety analysis codes. In this study, the CCFL model is evaluated with MARS-KS, based on two-phase two-field governing equations, and SPACE, based on two-phase three-field governing equations. MARS-KS is being used for evaluating the safety of Korean nuclear power plants, while SPACE is currently under assessment for evaluating the safety of newly designed nuclear power plants. The study compares the liquid upflow and liquid downflow rates predicted by the two codes for different gas flow rates against the famous Dukler air/water flooding experimental data. This comparison is helpful for understanding the differences between system analysis codes with different governing equations, models, and correlations, and for further improving the accuracy of such codes.

  19. Association of Postoperative Readmissions With Surgical Quality Using a Delphi Consensus Process to Identify Relevant Diagnosis Codes.

    Science.gov (United States)

    Mull, Hillary J; Graham, Laura A; Morris, Melanie S; Rosen, Amy K; Richman, Joshua S; Whittle, Jeffery; Burns, Edith; Wagner, Todd H; Copeland, Laurel A; Wahl, Tyler; Jones, Caroline; Hollis, Robert H; Itani, Kamal M F; Hawn, Mary T

    2018-04-18

    Postoperative readmission data are used to measure hospital performance, yet the extent to which these readmissions reflect surgical quality is unknown. To establish expert consensus on whether reasons for postoperative readmission are associated with the quality of surgery in the index admission. In a modified Delphi process, a panel of 14 experts in medical and surgical readmissions comprising physicians and nonphysicians from Veterans Affairs (VA) and private-sector institutions reviewed 30-day postoperative readmissions from fiscal years 2008 through 2014 associated with inpatient surgical procedures performed at a VA medical center between October 1, 2007, and September 30, 2014. The consensus process was conducted from January through May 2017. Reasons for readmission were grouped into categories based on International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes. Panelists were given the proportion of readmissions coded by each reason and median (interquartile range) days to readmission. They answered the question, "Does the readmission reason reflect possible surgical quality of care problems in the index admission?" on a scale of 1 (never related) to 5 (directly related) in 3 rounds of consensus building. The consensus process was completed in May 2017 and data were analyzed in June 2017. Consensus on proportion of ICD-9-coded readmission reasons that reflected quality of surgical procedure. In 3 Delphi rounds, the 14 panelists achieved consensus on 50 reasons for readmission; 12 panelists also completed group telephone calls between rounds 1 and 2. Readmissions with diagnoses of infection, sepsis, pneumonia, hemorrhage/hematoma, anemia, ostomy complications, acute renal failure, fluid/electrolyte disorders, or venous thromboembolism were considered associated with surgical quality and accounted for 25 521 of 39 664 readmissions (64% of readmissions; 7.5% of 340 858 index surgical procedures). The proportion of readmissions

  20. Grid Code Requirements for Wind Power Integration

    DEFF Research Database (Denmark)

    Wu, Qiuwei

    2018-01-01

    This chapter reviews the grid code requirements for integration of wind power plants (WPPs). The grid codes reviewed are from the UK, Ireland, Germany, Denmark, Spain, Sweden, the USA, and Canada. Transmission system operators (TSOs) around the world have specified requirements for WPPs under...

  1. UNR. A code for processing unresolved resonance data for MCNP

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-09-01

    In neutron transport problems the correct treatment of self-shielding is important for those nuclei present in large concentrations. Monte Carlo calculations using continuous-energy cross section data, such as calculations with the code MCNP, offer the advantage that neutron transport is calculated in a very accurate way. Self-shielding in the resolved resonance region is taken into account exactly in MCNP. However, self-shielding in the unresolved resonance region cannot be taken into account by MCNP, although its effect may be important in many applications. In this report a description is given of the computer code UNR. With this code, problem-dependent cross section libraries can be produced for MCNP. In these libraries self-shielded cross section data in the unresolved resonance range are given, which are produced by the NJOY module UNRESR. It is noted that the treatment of resonance self-shielding presented in this report is approximate. However, the current version of MCNP does not allow the use of probability tables, which would be a general solution. (orig.)

  2. Models of neural networks temporal aspects of coding and information processing in biological systems

    CERN Document Server

    Hemmen, J; Schulten, Klaus

    1994-01-01

    Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregation...

  3. Infinity-Norm Permutation Covering Codes from Cyclic Groups

    OpenAIRE

    Karni, Ronen; Schwartz, Moshe

    2017-01-01

    We study covering codes of permutations with the $\ell_\infty$-metric. We provide a general code construction, which uses smaller building-block codes. We study cyclic transitive groups as building blocks, determining their exact covering radius, and showing linear-time algorithms for finding a covering codeword. We also bound the covering radius of relabeled cyclic transitive groups under conjugation.
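
    The objects involved — the $\ell_\infty$ distance on permutations and the covering radius of a code — are easy to state concretely. A brute-force sketch for tiny $n$ (illustrative only; the paper's contribution is precisely to determine covering radii without such exhaustive search):

```python
from itertools import permutations

def linf(p, q):
    """l-infinity distance between two permutations given as tuples."""
    return max(abs(a - b) for a, b in zip(p, q))

def covering_radius(code, n):
    """Max over all n! permutations of the distance to the nearest codeword."""
    return max(min(linf(p, c) for c in code) for p in permutations(range(n)))

def cyclic_group(n):
    """The cyclic transitive group generated by i -> i+1 (mod n), as a code."""
    return [tuple((i + k) % n for i in range(n)) for k in range(n)]
```

    For example, `covering_radius(cyclic_group(4), 4)` computes the exact covering radius of the order-4 cyclic group by exhaustion, something that becomes infeasible well before the parameter sizes the paper's linear-time algorithms handle.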

  4. Fault tree analysis. Implementation of the WAM-codes

    International Nuclear Information System (INIS)

    Bento, J.P.; Poern, K.

    1979-07-01

    The report describes work going on at Studsvik on the implementation of the WAM code package for fault tree analysis. These codes, originally developed under EPRI contract by Sciences Applications Inc, allow, in contrast with other fault tree codes, all Boolean operations, thus permitting the modeling of "NOT" conditions and dependent components. To make the implementation of these codes concrete, the auxiliary feed-water system of the Swedish BWR Oskarshamn 2 was chosen for the reliability analysis. For this system, both the mean unavailability and the probability density function of the top event - the undesired event - of the system fault tree were calculated, the latter using a Monte Carlo simulation technique. The present study is the first part of a work performed under contract with the Swedish Nuclear Power Inspectorate. (author)

  5. Modification and validation of the natural heat convection and subcooled void formation models in the code PARET

    International Nuclear Information System (INIS)

    Hainoun, A.; Alhabit, F.; Ghazi, N.

    2008-01-01

    Two new modifications have been included in the current PARET code, which is widely applied in the dynamic and safety analysis of research reactors. A new model was implemented for the simulation of void formation in the subcooled boiling regime; the other modification dealt with the implementation of a new approach to improve the prediction of the heat transfer coefficient under natural circulation conditions. The modified code was successfully validated using adequate single-effect tests covering the physical phenomena of interest for both natural circulation and subcooled void formation at low pressure and low heat flux. The validation results indicate a significant improvement of the code compared to the default version. Additionally, to simplify the application of the code, an interactive user interface was developed enabling pre- and post-processing of the code predictions. (author)

  6. Network Coding Protocols for Smart Grid Communications

    DEFF Research Database (Denmark)

    Prior, Rui; Roetter, Daniel Enrique Lucani; Phulpin, Yannick

    2014-01-01

    We propose a robust network coding protocol for enhancing the reliability and speed of data gathering in smart grids. At the heart of our protocol lies the idea of tunable sparse network coding, which adopts the transmission of sparsely coded packets at the beginning of the transmission process b...

  7. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  8. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound-based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  9. Durability of switchable QR code carriers under hydrolytic and photolytic conditions

    International Nuclear Information System (INIS)

    Ecker, Melanie; Pretsch, Thorsten

    2013-01-01

    Following a guest diffusion approach, the surface of a shape memory poly(ester urethane) (PEU) was either black or blue colored. Bowtie-shaped quick response (QR) code carriers were then obtained from laser engraving and cutting, before thermo-mechanical functionalization (programming) was applied to stabilize the PEU in a thermo-responsive (switchable) state. The stability of the dye within the polymer surface and long-term functionality of the polymer were investigated against UVA and hydrolytic ageing. Spectrophotometric investigations verified UVA ageing-related color shifts from black to yellow-brownish and blue to petrol-greenish whereas hydrolytically aged samples changed from black to greenish and blue to light blue. In the case of UVA ageing, color changes were accompanied by dye decolorization, whereas hydrolytic ageing led to contrast declines due to dye diffusion. The Michelson contrast could be identified as an effective tool to follow ageing-related contrast changes between surface-dyed and laser-ablated (undyed) polymer regions. As soon as the Michelson contrast fell below a crucial value of 0.1 due to ageing, the QR code was no longer decipherable with a scanning device. Remarkably, the PEU information carrier base material could even then be adequately fixed and recovered. Hence, the surface contrast turned out to be the decisive parameter for QR code carrier applicability. (paper)
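
    The decision rule reported above — a QR code stops being machine-readable once the Michelson contrast between dyed and laser-ablated regions falls below 0.1 — can be written out directly. The Michelson contrast formula is standard; the example intensity values are hypothetical:

```python
def michelson_contrast(i_max, i_min):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin), in [0, 1]."""
    return (i_max - i_min) / (i_max + i_min)

def qr_decipherable(i_dyed, i_ablated, threshold=0.1):
    """Apply the crucial 0.1 threshold reported in the study to two region intensities."""
    hi, lo = max(i_dyed, i_ablated), min(i_dyed, i_ablated)
    return michelson_contrast(hi, lo) >= threshold
```

    As ageing drives the two intensities together (dye decolorization or diffusion), the contrast decays toward zero and the readability check eventually fails.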

  10. Durability of switchable QR code carriers under hydrolytic and photolytic conditions

    Science.gov (United States)

    Ecker, Melanie; Pretsch, Thorsten

    2013-09-01

    Following a guest diffusion approach, the surface of a shape memory poly(ester urethane) (PEU) was either black or blue colored. Bowtie-shaped quick response (QR) code carriers were then obtained from laser engraving and cutting, before thermo-mechanical functionalization (programming) was applied to stabilize the PEU in a thermo-responsive (switchable) state. The stability of the dye within the polymer surface and long-term functionality of the polymer were investigated against UVA and hydrolytic ageing. Spectrophotometric investigations verified UVA ageing-related color shifts from black to yellow-brownish and blue to petrol-greenish whereas hydrolytically aged samples changed from black to greenish and blue to light blue. In the case of UVA ageing, color changes were accompanied by dye decolorization, whereas hydrolytic ageing led to contrast declines due to dye diffusion. The Michelson contrast could be identified as an effective tool to follow ageing-related contrast changes between surface-dyed and laser-ablated (undyed) polymer regions. As soon as the Michelson contrast fell below a crucial value of 0.1 due to ageing, the QR code was no longer decipherable with a scanning device. Remarkably, the PEU information carrier base material could even then be adequately fixed and recovered. Hence, the surface contrast turned out to be the decisive parameter for QR code carrier applicability.

  11. Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal

    Science.gov (United States)

    Zamudio, Gabriel S.; José, Marco V.

    2018-03-01

    In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.

  12. Non-Protein Coding RNAs

    CERN Document Server

    Walter, Nils G; Batey, Robert T

    2009-01-01

    This book assembles chapters from experts in the Biophysics of RNA to provide a broadly accessible snapshot of the current status of this rapidly expanding field. The 2006 Nobel Prize in Physiology or Medicine was awarded to the discoverers of RNA interference, highlighting just one example of a large number of non-protein coding RNAs. Because non-protein coding RNAs outnumber protein coding genes in mammals and other higher eukaryotes, it is now thought that the complexity of organisms is correlated with the fraction of their genome that encodes non-protein coding RNAs. Essential biological processes as diverse as cell differentiation, suppression of infecting viruses and parasitic transposons, higher-level organization of eukaryotic chromosomes, and gene expression itself are found to largely be directed by non-protein coding RNAs. The biophysical study of these RNAs employs X-ray crystallography, NMR, ensemble and single molecule fluorescence spectroscopy, optical tweezers, cryo-electron microscopy, and ot...

  13. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols.
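
    As a concrete reading of the criterion, the adaptive (sequential) code length of a binary symbol stream under a simple Laplace estimator can be computed as follows. The estimator choice is an assumption made for illustration; the design problem in the paper would sum such lengths over the symbols routed to each quantized context:

```python
import math

def adaptive_code_length(bits):
    """Ideal adaptive code length in bits: each symbol costs -log2 of its
    probability under the current Laplace-smoothed counts."""
    counts = [1, 1]  # add-one (Laplace) smoothing
    total = 0.0
    for b in bits:
        p = counts[b] / (counts[0] + counts[1])
        total += -math.log2(p)
        counts[b] += 1
    return total
```

    A strongly biased context yields a shorter adaptive code length than a balanced one, which is what a good context quantizer exploits: contexts are merged only when doing so barely increases the total adaptive code length.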

  14. The histone codes for meiosis.

    Science.gov (United States)

    Wang, Lina; Xu, Zhiliang; Khawar, Muhammad Babar; Liu, Chao; Li, Wei

    2017-09-01

    Meiosis is a specialized process that produces haploid gametes from diploid cells by a single round of DNA replication followed by two successive cell divisions. It contains many special events, such as programmed DNA double-strand break (DSB) formation, homologous recombination, crossover formation and resolution. These events are associated with dynamically regulated chromosomal structures, the dynamic transcriptional regulation and chromatin remodeling are mainly modulated by histone modifications, termed 'histone codes'. The purpose of this review is to summarize the histone codes that are required for meiosis during spermatogenesis and oogenesis, involving meiosis resumption, meiotic asymmetric division and other cellular processes. We not only systematically review the functional roles of histone codes in meiosis but also discuss future trends and perspectives in this field. © 2017 Society for Reproduction and Fertility.

  15. The θ-γ neural code.

    Science.gov (United States)

    Lisman, John E; Jensen, Ole

    2013-03-20

    Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes. Copyright © 2013 Elsevier Inc. All rights reserved.
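
    A toy sketch of the ordering scheme described above — one item per gamma subcycle within each theta cycle — with the frequencies and the slot assignment chosen purely for illustration:

```python
def theta_gamma_schedule(items, theta_hz=8.0, gamma_hz=64.0):
    """Assign the i-th item to the i-th gamma subcycle of a theta cycle,
    returning firing times (in seconds) relative to the theta cycle start."""
    theta_period = 1.0 / theta_hz
    gamma_period = 1.0 / gamma_hz
    slots = int(theta_period / gamma_period)  # ~8 gamma subcycles per theta cycle
    if len(items) > slots:
        raise ValueError("more items than gamma subcycles in one theta cycle")
    return {item: i * gamma_period for i, item in enumerate(items)}
```

    The number of gamma subcycles per theta cycle bounds how many ordered items fit into one cycle, one candidate explanation for the small capacity of working memory.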

  16. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    Science.gov (United States)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  17. Feedback equivalence of convolutional codes over finite rings

    Directory of Open Access Journals (Sweden)

    DeCastro-García Noemí

    2017-12-01

    Full Text Available The approach to convolutional codes from the linear systems point of view provides effective tools for constructing convolutional codes with properties that make them suitable for many applications. In this work, we have generalized feedback equivalence between families of convolutional codes and linear systems over certain rings, and we show that every locally Brunovsky linear system may be regarded as a representation of a code under feedback convolutional equivalence.
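
    The paper treats convolutional codes via linear systems over rings. As a much simpler, hedged illustration of the shift-register (state-space) view it builds on, here is a classic rate-1/2 binary convolutional encoder; the generator pair (7,5) in octal is a textbook choice, not taken from this work:

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 binary convolutional encoder with constraint length K.
    g1, g2 are generator polynomials (the classic (7,5)_8 pair).
    The shift-register state makes the linear-system view explicit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)   # parity under g1
        out.append(bin(state & g2).count("1") % 2)   # parity under g2
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

    Codes over finite rings generalize exactly this picture: the state and outputs live in a ring rather than GF(2).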

  18. Reliability of cause of death coding: an international comparison.

    Science.gov (United States)

    Antini, Carmen; Rajs, Danuta; Muñoz-Quezada, María Teresa; Mondaca, Boris Andrés Lucero; Heiss, Gerardo

    2015-07-01

    This study evaluates the agreement of nosologic coding of cardiovascular causes of death between a Chilean coder and one in the United States, in a stratified random sample of death certificates of persons aged ≥ 60, issued in 2008 in the Valparaíso and Metropolitan regions, Chile. All causes of death were converted to ICD-10 codes in parallel by both coders. Concordance was analyzed using inter-coder agreement and Cohen's kappa coefficient, by level of specification of the ICD-10 code, for both the underlying cause and all causes of death. Inter-coder agreement was 76.4% for all causes of death and 80.6% for the underlying cause (agreement at the four-digit level), with differences by level of specification of the ICD-10 code, by line of the death certificate, and by number of causes of death per certificate. Cohen's kappa coefficient was 0.76 (95%CI: 0.68-0.84) for the underlying cause and 0.75 (95%CI: 0.74-0.77) for all causes of death. In conclusion, cause-of-death coding and inter-coder agreement for cardiovascular diseases in two regions of Chile are comparable to an external benchmark and to reports from other countries.
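
    Cohen's kappa, the agreement statistic reported above, is straightforward to compute. A minimal sketch (the ICD-10 assignments below are hypothetical examples, not study data):

```python
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected inter-coder agreement (Cohen's kappa)."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
    cats = set(coder_a) | set(coder_b)
    # expected agreement under independent coding
    p_e = sum((coder_a.count(c) / n) * (coder_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical four-digit ICD-10 assignments by two coders
coder_1 = ["I21.9", "I25.1", "I21.9", "I50.0"]
coder_2 = ["I21.9", "I25.1", "I10",   "I50.0"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # → 0.667
```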

  19. Polarization diversity scheme on spectral polarization coding optical code-division multiple-access network

    Science.gov (United States)

    Yen, Chih-Ta; Huang, Jen-Fa; Chang, Yao-Tang; Chen, Bo-Hau

    2010-12-01

    We present an experiment demonstrating a spectral-polarization coding optical code-division multiple-access system under nonideal state-of-polarization (SOP) matching conditions. In the proposed system, the encoding and double balanced-detection processes are implemented using a polarization-diversity scheme. Because of the quasiorthogonality of Hadamard codes, combined with arrayed waveguide grating routers and a polarization beam splitter, the proposed codec pair can encode and decode multiple Hadamard code words while retaining the ability to cancel multiple-access interference. The experimental results demonstrate that when the system maintains an orthogonal SOP for each user, an effective reduction in the phase-induced intensity noise is obtained. The analytical SNR values are found to overstate the experimental results by around 2 dB when the received effective power is large, mainly because of insertion losses of components and a nonflattened optical light source. Furthermore, the matching conditions can be improved by reducing these nonideal influences.
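
    The multiple-access interference cancellation attributed above to the quasiorthogonality of Hadamard codes can be checked numerically. A hedged sketch (the Sylvester construction and unipolar code mapping are standard SAC-OCDMA conventions, not details from this experiment):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
H = hadamard(N)
codes = (1 + H[1:]) // 2          # unipolar 0/1 code words (skip the all-ones row)

def balanced(r, c):
    """Balanced detection: correlate with the code minus its complement."""
    return int(r @ c - r @ (1 - c))

desired, interferer = codes[0], codes[1]
r = desired + interferer          # two users superposed on the channel
print(balanced(r, desired))           # desired user survives: N/2 = 4
print(balanced(interferer, desired))  # interference alone cancels: 0
```

    The cancellation works because any two distinct unipolar Hadamard codes overlap in exactly N/4 chips, both with each other and with each other's complements.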

  20. THE LEGAL STATUS OF COMPANIES UNDER THE NEW CIVIL CODE

    Directory of Open Access Journals (Sweden)

    Lucian Bernd SĂULEANU

    2017-10-01

    Full Text Available The new Civil Code sets out provisions regarding the liability of shareholders, the organization and functioning of the legal entity, the annulment of documents issued by the management bodies of the legal entity, the company contract, the regime of contributions, and company types: simple partnership, unlimited partnership, simple limited partnership, limited liability company, joint-stock company, partnership limited by shares, cooperatives, and other types of company.

  1. Running codes through the web

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    2001-01-01

    Dr. Clark presented a report and demonstration on running atomic physics codes through the WWW. The atomic physics data are generated by Los Alamos National Laboratory (LANL) codes that calculate electron impact excitation, ionization, photoionization, and autoionization, and the inverse processes obtained through detailed balance. Samples of the Web interfaces, input, and output are given in the report

  2. The Development of the World Anti-Doping Code.

    Science.gov (United States)

    Young, Richard

    2017-01-01

    This chapter addresses both the development and substance of the World Anti-Doping Code, which came into effect in 2003, as well as the subsequent Code amendments, which came into effect in 2009 and 2015. Through an extensive process of stakeholder input and collaboration, the World Anti-Doping Code has transformed the hodgepodge of inconsistent and competing pre-2003 anti-doping rules into a harmonized and effective approach to anti-doping. The Code, as amended, is now widely recognized worldwide as the gold standard in anti-doping. The World Anti-Doping Code originally went into effect on January 1, 2004. The first amendments to the Code went into effect on January 1, 2009, and the second amendments on January 1, 2015. The Code and the related international standards are the product of a long and collaborative process designed to make the fight against doping more effective through the adoption and implementation of worldwide harmonized rules and best practices. © 2017 S. Karger AG, Basel.

  3. Basic concept of common reactor physics code systems. Final report of working party on common reactor physics code systems (CCS)

    International Nuclear Information System (INIS)

    2004-03-01

    A working party on common reactor physics code systems was organized for two years (2001-2002) under the Research Committee on Reactor Physics of JAERI. This final report is a compilation of the working party's activity during those two years. The objective of the working party is to clarify the basic concept of common reactor physics code systems in order to improve the convenience of reactor physics code systems for reactor physics researchers in Japan across their various fields of research and development. We held four meetings during the two years, investigated the status of reactor physics code systems and innovative software technologies, and discussed the basic concept of common reactor physics code systems. (author)

  4. Energy information data base: report number codes

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-09-01

    Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with codes each has used. (RWR)

  5. Energy information data base: report number codes

    International Nuclear Information System (INIS)

    1979-09-01

    Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with codes each has used

  6. Performance measures for transform data coding.

    Science.gov (United States)

    Pearl, J.; Andrews, H. C.; Pratt, W. K.

    1972-01-01

    This paper develops performance criteria for evaluating transform data coding schemes under computational constraints. Computational constraints that conform with the proposed basis-restricted model give rise to suboptimal coding efficiency characterized by a rate-distortion relation R(D) similar in form to the theoretical rate-distortion function. Numerical examples of this performance measure are presented for Fourier, Walsh, Haar, and Karhunen-Loeve transforms.
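
    A basis-restricted transform coder of the kind evaluated above can be sketched by keeping only the k largest transform coefficients and measuring the resulting distortion; more coefficients (higher rate) means lower distortion. The Walsh (Hadamard) transform and the toy signal below are illustrative choices, not the paper's data:

```python
import numpy as np

def walsh(n):
    """Sylvester-ordered Walsh-Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
H = walsh(N)
x = np.array([4.0, 4.1, 3.9, 4.0, 1.0, 1.1, 0.9, 1.0])  # piecewise-smooth signal

y = H @ x / N                      # forward transform (H @ H.T = N * I)
for k in (1, 2, 4, 8):             # "rate": number of coefficients kept
    idx = np.argsort(np.abs(y))[::-1][:k]
    yk = np.zeros_like(y)
    yk[idx] = y[idx]               # zero out all but the k largest coefficients
    xk = H.T @ yk                  # inverse transform
    print(k, round(float(np.mean((x - xk) ** 2)), 4))  # distortion falls with k
```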

  7. Signalign: An Ontology of DNA as Signal for Comparative Gene Structure Prediction Using Information-Coding-and-Processing Techniques.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Gu, Feng; Pan, Yi

    2016-03-01

    Conventional character-analysis-based techniques in genome analysis manifest three main shortcomings: inefficiency, inflexibility, and incompatibility. In our previous research, a general framework called DNA As X was proposed for character-analysis-free techniques to overcome these shortcomings, where X is an intermediate representation such as digit, code, signal, vector, tree, graph, or network. In this paper, we further implement an ontology of DNA As Signal by designing a tool named Signalign for comparative gene structure analysis, in which DNA sequences are converted into signal series, processed by a modified method of dynamic time warping, and measured by signal-to-noise ratio (SNR). The ontology of DNA As Signal integrates the principles and concepts of other disciplines, including information coding theory and signal processing, into sequence analysis and processing. Compared with conventional character-analysis-based methods, Signalign not only achieves equivalent or superior performance but also enriches the tools and the knowledge library of computational biology by extending the domain from characters/strings to diverse areas. The evaluation results validate the success of the character-analysis-free technique in improving performance in comparative gene structure prediction.
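
    A minimal version of the dynamic time warping step that Signalign builds on can be sketched as follows (this is the textbook DP recurrence, not Signalign's modified method):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D signal series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # → 0.0 (time-warped copy aligns exactly)
print(dtw([1, 2, 3], [1, 2, 4]))     # → 1.0
```

    The appeal for gene structure comparison is that warping absorbs local stretches and compressions between signal series that plain pointwise distances would penalize.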

  8. Abstraction carrying code and resource-awareness

    OpenAIRE

    Hermenegildo, Manuel V.; Albert Albiol, Elvira; López García, Pedro; Puebla Sánchez, Alvaro Germán

    2005-01-01

    Proof-Carrying Code (PCC) is a general approach to mobile code safety in which the code supplier augments the program with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. Abstraction Carrying Code (ACC) is an enabling technology for PCC in which an abstract mod...

  9. Independent peer review of nuclear safety computer codes

    International Nuclear Information System (INIS)

    Boyack, B.E.; Jenks, R.P.

    1993-01-01

    A structured, independent computer code peer-review process has been developed to assist the US Nuclear Regulatory Commission (NRC) and the US Department of Energy in their nuclear safety missions. This paper describes a structured process of independent code peer review, benefits associated with a code-independent peer review, as well as the authors' recent peer-review experience. The NRC adheres to the principle that safety of plant design, construction, and operation are the responsibility of the licensee. Nevertheless, NRC staff must have the ability to independently assess plant designs and safety analyses submitted by license applicants. According to Ref. 1, "this requires that a sound understanding be obtained of the important physical phenomena that may occur during transients in operating power plants." The NRC concluded that computer codes are the principal products to "understand and predict plant response to deviations from normal operating conditions" and has developed several codes for that purpose. However, codes cannot be used blindly; they must be assessed and found adequate for the purposes they are intended to serve. A key part of the qualification process can be accomplished through code peer reviews; this approach has been adopted by the NRC

  10. Radionuclide daughter inventory generator code: DIG

    International Nuclear Information System (INIS)

    Fields, D.E.; Sharp, R.D.

    1985-09-01

    The Daughter Inventory Generator (DIG) code accepts a tabulation of radionuclides initially present in a waste stream, specified as amounts present either by mass or by activity, and produces a tabulation of the radionuclides present after a user-specified elapsed time. This resultant radionuclide inventory characterizes wastes that have undergone daughter ingrowth during subsequent processes, such as leaching and transport, and includes daughter radionuclides that should be considered in these subsequent processes or for inclusion in a pollutant source term. Output of the DIG code also summarizes radionuclide decay constants. The DIG code was developed specifically to assist the user of the PRESTO-II methodology and code in preparing data sets and accounting for possible daughter ingrowth in wastes buried in shallow-land disposal areas. The DIG code is also useful in preparing data sets for the PRESTO-EPA code. Daughter ingrowth is considered both for buried radionuclides and for radionuclides that have been leached from the wastes and are undergoing hydrologic transport, and the quantities of daughter radionuclides are calculated. Radionuclide decay constants generated by DIG and included in the DIG output are required in the PRESTO-II code input data set. DIG accesses some subroutines written for use with the CRRIS system and accesses files containing radionuclide data compiled by D.C. Kocher. 11 refs
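
    Daughter ingrowth of the kind DIG tabulates follows the Bateman equations. A hedged sketch for a two-member parent-daughter chain (the half-lives are illustrative placeholders, not nuclide data from DIG):

```python
import numpy as np

# Hypothetical chain: parent (half-life 8 d) -> daughter (half-life 2 d)
lam1 = np.log(2) / 8.0   # parent decay constant [1/d]
lam2 = np.log(2) / 2.0   # daughter decay constant [1/d]

def daughter_atoms(t, n1_0=1.0):
    """Bateman solution: daughter atoms at time t, daughter initially absent."""
    return n1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

# Cross-check the closed form against direct Euler integration of the chain
dt, t_end = 1e-4, 10.0
n1, n2 = 1.0, 0.0
for _ in range(round(t_end / dt)):
    n1, n2 = n1 - lam1 * n1 * dt, n2 + (lam1 * n1 - lam2 * n2) * dt
print(round(daughter_atoms(t_end), 4), round(n2, 4))  # agree to ~3 decimals
```

    Longer chains extend the same idea with one exponential term per chain member, which is essentially what a daughter-inventory generator evaluates for every nuclide in the waste stream.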

  11. Identification of coding and non-coding mutational hotspots in cancer genomes.

    Science.gov (United States)

    Piraino, Scott W; Furney, Simon J

    2017-01-05

    The identification of mutations that play a causal role in tumour development, so-called "driver" mutations, is of critical importance for understanding how cancers form and how they might be treated. Several large cancer sequencing projects have identified genes that are recurrently mutated in cancer patients, suggesting a role in tumourigenesis. While the landscape of coding drivers has been extensively studied and many of the most prominent driver genes are well characterised, comparatively less is known about the role of mutations in the non-coding regions of the genome in cancer development. The continuing fall in genome sequencing costs has resulted in a concomitant increase in the number of cancer whole genome sequences being produced, facilitating systematic interrogation of both the coding and non-coding regions of cancer genomes. To examine the mutational landscapes of tumour genomes we have developed a novel method to identify mutational hotspots in tumour genomes using both mutational data and information on evolutionary conservation. We have applied our methodology to over 1300 whole cancer genomes and show that it identifies prominent coding and non-coding regions that are known or highly suspected to play a role in cancer. Importantly, we applied our method to the entire genome, rather than relying on predefined annotations (e.g., promoter regions), and we highlight recurrently mutated regions that may have resulted from increased exposure to mutational processes rather than selection, some of which have been identified previously as targets of selection. Finally, we implicate several pan-cancer and cancer-specific candidate non-coding regions, which could be involved in tumourigenesis. We have developed a framework to identify mutational hotspots in cancer genomes, which is applicable to the entire genome. This framework identifies known and novel coding and non-coding mutational hotspots and can be used to differentiate candidate driver regions from

  12. Performance analysis of multiple interference suppression over asynchronous/synchronous optical code-division multiple-access system based on complementary/prime/shifted coding scheme

    Science.gov (United States)

    Nieh, Ta-Chun; Yang, Chao-Chin; Huang, Jen-Fa

    2011-08-01

    A complete complementary/prime/shifted prime (CPS) code family for the optical code-division multiple-access (OCDMA) system is proposed. Based on the properties of complete complementary (CC) codes, multiple-access interference (MAI) can be suppressed and eliminated in a spectral amplitude coding (SAC) OCDMA system under asynchronous/synchronous transmission. By utilizing the shifted prime (SP) code in the SAC scheme, the hardware implementation of the encoder/decoder can be simplified with a reduced number of optical components, such as arrayed waveguide gratings (AWG) and fiber Bragg gratings (FBG). This system has superior performance compared to previous bipolar-bipolar coding OCDMA systems.

  13. Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.

    Science.gov (United States)

    Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca

    2017-08-01

    Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than .60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.

  14. Error analysis of supercritical water correlations using ATHLET system code under DHT conditions

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, J., E-mail: jeffrey.samuel@uoit.ca [Univ. of Ontario Inst. of Tech., Oshawa, ON (Canada)

    2014-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is used for the analysis of anticipated and abnormal plant transients, including safety analysis of Light Water Reactors (LWRs) and Russian Graphite-Moderated High Power Channel-type Reactors (RBMKs). The range of applicability of ATHLET has been extended to supercritical water by updating the fluid- and transport-properties packages, thus enabling the code to be used in the analysis of SuperCritical Water-cooled Reactors (SCWRs). Several well-known heat-transfer correlations for supercritical fluids were added to the ATHLET code, and a numerical model was created to represent an experimental test section. In this work, the error in the Heat Transfer Coefficient (HTC) calculated by the ATHLET model is studied, along with the ability of the various correlations to predict different heat transfer regimes. (author)
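
    Many supercritical-water HTC correlations are built as modifications of the classic Dittus-Boelter form, Nu = 0.023 Re^0.8 Pr^0.4. A hedged sketch of that baseline (the fluid properties below are illustrative round numbers, not the experiment's conditions or ATHLET's implementation):

```python
def dittus_boelter_htc(mass_flux, D, mu, cp, k_fluid):
    """Baseline single-phase HTC from Dittus-Boelter: Nu = 0.023 Re^0.8 Pr^0.4.
    mass_flux G [kg/m^2 s], hydraulic diameter D [m], viscosity mu [Pa s],
    specific heat cp [J/kg K], thermal conductivity k_fluid [W/m K]."""
    Re = mass_flux * D / mu          # Reynolds number
    Pr = cp * mu / k_fluid           # Prandtl number
    Nu = 0.023 * Re**0.8 * Pr**0.4   # Nusselt number
    return Nu * k_fluid / D          # HTC [W/m^2 K]

# Illustrative tube conditions (placeholders only)
htc = dittus_boelter_htc(mass_flux=1000.0, D=0.008, mu=1e-4, cp=5000.0, k_fluid=0.5)
print(round(htc))  # on the order of 10^4 W/m^2 K
```

    Supercritical correlations typically add property-ratio corrections to this form to capture deteriorated heat transfer, which constant-property Dittus-Boelter misses entirely.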

  15. Verification of thermal-hydraulic computer codes against standard problems for WWER reflooding

    International Nuclear Information System (INIS)

    Alexander D Efanov; Vladimir N Vinogradov; Victor V Sergeev; Oleg A Sudnitsyn

    2005-01-01

    Full text of publication follows: The computational assessment of reactor core component behavior under accident conditions is impossible without knowledge of the thermal-hydraulic processes involved. The adequacy of the results obtained using computer codes to the real processes is verified by carrying out a number of standard problems. In 2000-2003, three Russian standard problems on WWER core reflooding were carried out using experiments on the cooldown of a full-height, electrically heated WWER 37-rod bundle model in regimes of bottom (SP-1), top (SP-2) and combined (SP-3) reflooding. Representatives from eight MINATOM organizations took part in this work, in the course of which 'blind' and post-test calculations were performed using various versions of the RELAP5, ATHLET, CATHARE, COBRA-TF, TRAP and KORSAR computer codes. The paper presents a brief description of the test facility, test section, test scenarios and conditions, as well as the basic results of the computational analysis of the experiments. The analysis of the test data revealed a significantly non-one-dimensional nature of the cooldown and rewetting of heater rods heated to high temperature in a model bundle. This was most pronounced for top and combined reflooding. The verification of the reflooding computer codes showed that most of them predict the peak rod temperature and the bundle cooldown time fairly well. The exception is provided by the results of calculations with the ATHLET and CATHARE codes. The nature and rate of rewetting front advance in the lower half of the bundle are predicted fairly well by practically all the computer codes. The disagreement between the calculations and experimental results for the upper half of the bundle is caused by the difficulty of simulating multidimensional effects with 1-D computer codes. In this regard, the quasi-two-dimensional computer code COBRA-TF offers certain advantages.
Overall, the closest

  16. Coupled geochemical and solute transport code development

    International Nuclear Information System (INIS)

    Morrey, J.R.; Hostetler, C.J.

    1985-01-01

    A number of coupled geochemical-hydrologic codes have been reported in the literature. Some of these codes directly couple the source-sink term to the solute transport equation. The current consensus seems to be that directly coupling hydrologic transport and chemical models through a series of interdependent differential equations is not feasible for multicomponent problems with complex geochemical processes (e.g., precipitation/dissolution reactions). A two-step process appears to be the required method of coupling codes for problems where a large suite of chemical reactions must be monitored. The two-step structure requires that the source-sink term in the transport equation be supplied by a geochemical code rather than by an analytical expression. We have developed a one-dimensional, two-step coupled model (CTM1D) designed to calculate relatively complex geochemical equilibria. Our geochemical module implements a Newton-Raphson algorithm to solve heterogeneous geochemical equilibria, involving up to 40 chemical components and 400 aqueous species. The geochemical module was designed to be efficient and compact. A revised version of the MINTEQ code is used as the parent geochemical code
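
    The Newton-Raphson solution of equilibria mentioned above can be sketched on a toy speciation problem: one complex AB with stability constant K and two mass balances (all constants are illustrative and far below CTM1D's 40-component scale):

```python
import numpy as np

# Hypothetical single-complex equilibrium: A + B <-> AB with K = [AB]/([A][B])
K, TA, TB = 100.0, 1.0e-2, 5.0e-3   # stability constant and total concentrations

def residual(x):
    """Mass-balance residuals for free concentrations x = [A, B]."""
    A, B = x
    AB = K * A * B
    return np.array([A + AB - TA, B + AB - TB])

def jacobian(x):
    A, B = x
    return np.array([[1 + K * B, K * A],
                     [K * B,     1 + K * A]])

x = np.array([TA, TB])            # initial guess: no complexation
for _ in range(50):
    dx = np.linalg.solve(jacobian(x), -residual(x))  # Newton-Raphson step
    x = x + dx
    if np.max(np.abs(dx)) < 1e-15:
        break

A, B = x
print(round(K * A * B / TB, 3))   # fraction of B bound in the complex
```

    A geochemical module scales this same residual/Jacobian structure up to dozens of components and hundreds of species, with activity corrections and heterogeneous (mineral) phases added.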

  17. Towards a European code of medical ethics. Ethical and legal issues.

    Science.gov (United States)

    Patuzzo, Sara; Pulice, Elisabetta

    2017-01-01

    The feasibility of a common European code of medical ethics is discussed, with consideration and evaluation of the difficulties such a project is going to face, from both the legal and ethical points of view. On the one hand, the analysis will underline the limits of a common European code of medical ethics as an instrument for harmonising national professional rules in the European context; on the other hand, we will highlight some of the potentials of this project, which could be increased and strengthened through a proper rulemaking process and through adequate and careful choice of content. We will also stress specific elements and devices that should be taken into consideration during the establishment of the code, from both procedural and content perspectives. Regarding methodological issues, the limits and potentialities of a common European code of medical ethics will be analysed from an ethical point of view and then from a legal perspective. The aim of this paper is to clarify the framework for the potential but controversial role of the code in the European context, showing the difficulties in enforcing and harmonising national ethical rules into a European code of medical ethics. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  18. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) reactor physics code system and the associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection, complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219-group cross section library derived from JEF-1, for which some benchmark results are presented. The addition of the UNIPASC work environment greatly simplifies use of the code system. Complex chains of programs can easily be coupled together to form a single job. In addition, model parameters can be represented by variables instead of literal values, which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  19. Processing module operating methods, processing modules, and communications systems

    Science.gov (United States)

    McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy

    2014-09-09

    A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.

  20. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    Science.gov (United States)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files while ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: at present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process, a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii, will be presented.

  1. Development of Mathematical Model and Analysis Code for Estimating Drop Behavior of the Control Rod Assembly in the Sodium Cooled Fast Reactor

    International Nuclear Information System (INIS)

    Oh, Se-Hong; Kang, SeungHoon; Choi, Choengryul; Yoon, Kyung Ho; Cheon, Jin Sik

    2016-01-01

    On receiving the scram signal, the control rod assemblies are released and fall into the reactor core under their own weight; the drop time and falling velocity of the control rod assembly must therefore be estimated for the safety evaluation. There are three typical ways to estimate the drop behavior of the control rod assembly during a scram: experimental, numerical (CFD), and theoretical. Experimental and CFD methods, however, require considerable cost and time, and are therefore difficult to apply in the initial design process. In this study, a mathematical model and a theoretical analysis code have been developed to estimate the drop behavior of the control rod assembly and provide the underlying data for design optimization. A simplified control rod assembly model is considered to minimize uncertainty in the development process, and the hydraulic circuit analysis technique is adopted to evaluate the internal/external flow distribution of the control rod assembly. Finally, the theoretical analysis code (named HEXCON) has been developed based on the mathematical model. To verify the reliability of the developed code, a CFD analysis was conducted, a calculation using the developed analysis code was carried out under the same conditions, and both results were compared
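
    In its simplest form, the drop behavior such a code estimates reduces to integrating the assembly's equation of motion with a quadratic hydraulic resistance. A hedged sketch (all parameters are illustrative placeholders, not SFR design values, and this omits HEXCON's hydraulic-circuit flow distribution):

```python
# All parameters are illustrative placeholders, not design data
m, g = 50.0, 9.81              # assembly mass [kg], gravity [m/s^2]
rho, Cd, A = 850.0, 1.2, 3e-3  # coolant density, drag coefficient, frontal area
L = 1.0                        # drop stroke [m]

def drop_time():
    """Integrate m*dv/dt = m*g - 0.5*rho*Cd*A*v*|v| until the rod travels L."""
    t, v, s, dt = 0.0, 0.0, 0.0, 1e-4
    while s < L:
        a = g - 0.5 * rho * Cd * A * v * abs(v) / m  # net acceleration
        v += a * dt
        s += v * dt
        t += dt
    return t, v

t_drop, v_final = drop_time()
print(round(t_drop, 3), round(v_final, 2))  # close to free fall for small drag
```

    A design-grade model replaces the single drag term with pressure losses from the internal/external flow paths, which is where the hydraulic circuit analysis enters.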

  2. Behavioral processes underlying the decline of narcissists' popularity over time.

    Science.gov (United States)

    Leckelt, Marius; Küfner, Albrecht C P; Nestler, Steffen; Back, Mitja D

    2015-11-01

    Following a dual-pathway approach to the social consequences of grandiose narcissism, we investigated the behavioral processes underlying (a) the decline of narcissists' popularity in social groups over time and (b) how this is differentially influenced by the two narcissism facets, admiration and rivalry. In a longitudinal laboratory study, participants (N = 311) first provided narcissism self-reports using the Narcissistic Personality Inventory and the Narcissistic Admiration and Rivalry Questionnaire, and subsequently interacted with each other in small groups in weekly sessions over the course of 3 weeks. All sessions were videotaped and trained raters coded participants' behavior during the interactions. Within the sessions, participants provided mutual ratings on assertiveness, untrustworthiness, and likability. Results showed that (a) over time narcissists become less popular and (b) this is reflected in an initially positive but decreasing effect of narcissistic admiration as well as an increasing negative effect of narcissistic rivalry. As hypothesized, these patterns of results could be explained by means of two diverging behavioral pathways: the negative narcissistic pathway (i.e., arrogant-aggressive behavior and being seen as untrustworthy) plays an increasing role and is triggered by narcissistic rivalry, whereas the relevance of the positive narcissistic pathway (i.e., dominant-expressive behavior and being seen as assertive) triggered by narcissistic admiration decreases over time. These findings underline the utility of a behavioral pathway approach for disentangling the complex effects of personality on social outcomes. (c) 2015 APA, all rights reserved.

  3. Iterative nonlinear unfolding code: TWOGO

    International Nuclear Information System (INIS)

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and with real multisphere neutron measurements performed inside the containments of pressurized water reactors.
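    The iterative idea can be illustrated with a generic multiplicative (MLEM-style) unfolding loop that stops once the calculated sphere responses agree with the measurements within a tolerance, mirroring the termination criteria described above. This is a sketch of the general approach only, not the TWOGO algorithm, and the response matrix in the example is invented.

```python
import numpy as np

def unfold(response, measured, iters=2000, tol=0.01):
    """Multiplicative iterative unfolding: scale each flux bin by the
    response-weighted ratio of measured to calculated counts, stopping
    once every calculated response matches its measurement within tol."""
    phi = np.ones(response.shape[1])          # flat starting spectrum
    for _ in range(iters):
        calc = response @ phi                 # predicted sphere responses
        if np.max(np.abs(calc - measured) / measured) < tol:
            break                             # termination criterion
        phi *= (response.T @ (measured / calc)) / response.sum(axis=0)
    return phi
```

    With a consistent (synthetic) measurement vector, the loop drives the calculated responses to the measured ones; real unfolding additionally has to cope with measurement error and the non-uniqueness of few-channel data.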

  4. Reasons for Adopting or Revising a Journalism Ethics Code: The Case of Three Ethics Codes in the Netherlands

    OpenAIRE

    Poler Kovačič, Melita; van Putten, Anne-Marie

    2011-01-01

    The authors of this article approached the dilemma of whether or not a universal code of journalism ethics should be drafted based on the existence of factors prompting the need for a new ethics code in a national environment. Semi-structured interviews were performed with the key persons involved in the process of drafting or revising three ethics codes in the Netherlands from 2007 onwards: the Journalism Guideline by the Press Council, the Journalism Code by the Society of Chief-Editors and...

  5. 78 FR 9678 - Multi-stakeholder Process To Develop a Voluntary Code of Conduct for Smart Grid Data Privacy

    Science.gov (United States)

    2013-02-11

    ... providing consumer energy use services. DATES: Tuesday, February 26, 2013 (9:30 a.m. to 4:30 p.m., Eastern... Privacy and Promoting Innovation in the Global Digital Economy \\2\\ (Privacy Blueprint). The Privacy Blueprint outlines a multi-stakeholder process for developing voluntary codes of conduct that, if adopted by...

  6. Power feedback effects in the LEM code

    International Nuclear Information System (INIS)

    Kromar, M.

    1999-01-01

    The nodal diffusion code LEM has been extended with a power feedback option. Thermohydraulic and neutronic coupling is covered with the Reactivity Coefficient Method. Presented are results of the code testing. Verification is done on typical non-uprated NPP Krsko reload cycles. The results show that the code fulfills the objectives arising in the process of reactor core analysis. (author)

  7. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  8. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering the shifting of processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  9. SIMMER-II code and its applications

    International Nuclear Information System (INIS)

    Smith, L.L.

    1979-01-01

    The significant features of SIMMER-II, a disrupted-core analysis code, are described. The code has the capabilities to begin space-time neutronics calculations from nonstationary reactor states, to track the intermixing of fuel of different enrichments, and to model the complicated heat- and mass-transfer processes that occur in the transition phase. Example calculations are presented for analyses of whole-core accidents and for analyses of experiments supporting the code models.

  10. The Italian experience on T/H best estimate codes: Achievements and perspectives

    Energy Technology Data Exchange (ETDEWEB)

    Alemberti, A.; D'Auria, F.; Fiorino, E. [and others]

    1997-07-01

    Thermalhydraulic system codes are complex tools developed to simulate power plant behavior during off-normal conditions. Among the objectives of the code calculations, the evaluation of safety margins, operator training, and the optimization of the plant design and of the emergency operating procedures are mostly considered in the field of nuclear safety. The first generation of codes was developed in the United States at the end of the '60s. Since that time, different research groups all over the world have started the development of their own codes. At the beginning of the '80s, the second generation codes were proposed; these differ from the first generation codes in the number of balance equations solved (six instead of three), the sophistication of the constitutive models, and the adopted numerics. The capabilities of available computers have been fully exploited over the years. The authors then summarize some of the major steps in the process of developing, modifying, and advancing the capabilities of the codes. They touch on the fact that Italian, and for that matter non-American, researchers have not been intimately involved in much of this work. They then describe the application of these codes in Italy, even though there are no operating or under-construction nuclear power plants there at this time. Much of this effort is directed at the general question of plant safety in the face of transient-type events.

  11. The Italian experience on T/H best estimate codes: Achievements and perspectives

    International Nuclear Information System (INIS)

    Alemberti, A.; D'Auria, F.; Fiorino, E.

    1997-01-01

    Thermalhydraulic system codes are complex tools developed to simulate power plant behavior during off-normal conditions. Among the objectives of the code calculations, the evaluation of safety margins, operator training, and the optimization of the plant design and of the emergency operating procedures are mostly considered in the field of nuclear safety. The first generation of codes was developed in the United States at the end of the '60s. Since that time, different research groups all over the world have started the development of their own codes. At the beginning of the '80s, the second generation codes were proposed; these differ from the first generation codes in the number of balance equations solved (six instead of three), the sophistication of the constitutive models, and the adopted numerics. The capabilities of available computers have been fully exploited over the years. The authors then summarize some of the major steps in the process of developing, modifying, and advancing the capabilities of the codes. They touch on the fact that Italian, and for that matter non-American, researchers have not been intimately involved in much of this work. They then describe the application of these codes in Italy, even though there are no operating or under-construction nuclear power plants there at this time. Much of this effort is directed at the general question of plant safety in the face of transient-type events

  12. ASPECTS CONCERNING THE JOINT VENTURE UNDER THE REGULATION OF THE NEW CIVIL CODE

    Directory of Open Access Journals (Sweden)

    Ana-Maria Lupulescu

    2013-11-01

    Full Text Available The New Civil Code makes the transition, for the first time in the Romanian legal system, from the duality to the unity of private law. Consequently, the Civil Code contains a legal regulation more structured and comprehensive, although not entirely safe from any criticism, in relation to the company, with particular reference to the simple company, regulation that expressly characterizes itself as the common law in this field. Within these general provisions, the legislator has considered the joint venture, to which, however, as in the previous regulation contained in the old Commercial Code – now repealed –, it does not devote too many legal provisions, in order to maintain the flexibility of this form of company. Therefore, this approach appears particularly useful for analysts in law and, especially, for practitioners, since it aims to achieve a comprehensive analysis of the joint venture, form of company with practical incidence.

  13. Review of SKB's Code Documentation and Testing

    International Nuclear Information System (INIS)

    Hicks, T.W.

    2005-01-01

    SKB is in the process of developing the SR-Can safety assessment for a KBS 3 repository. The assessment will be based on quantitative analyses using a range of computational codes aimed at developing an understanding of how the repository system will evolve. Clear and comprehensive code documentation and testing will engender confidence in the results of the safety assessment calculations. This report presents the results of a review undertaken on behalf of SKI aimed at providing an understanding of how codes used in the SR 97 safety assessment and those planned for use in the SR-Can safety assessment have been documented and tested. Having identified the codes used by SKB, several codes were selected for review. Consideration was given to codes used directly in SKB's safety assessment calculations as well as to some of the less visible codes that are important in quantifying the different repository barrier safety functions. SKB's documentation and testing of the following codes were reviewed: COMP23 - a near-field radionuclide transport model developed by SKB for use in safety assessment calculations. FARF31 - a far-field radionuclide transport model developed by SKB for use in safety assessment calculations. PROPER - SKB's harness for executing probabilistic radionuclide transport calculations using COMP23 and FARF31. The integrated analytical radionuclide transport model that SKB has developed to run in parallel with COMP23 and FARF31. CONNECTFLOW - a discrete fracture network model/continuum model developed by Serco Assurance (based on the coupling of NAMMU and NAPSAC), which SKB is using to combine hydrogeological modelling on the site and regional scales in place of the HYDRASTAR code. DarcyTools - a discrete fracture network model coupled to a continuum model, recently developed by SKB for hydrogeological modelling, also in place of HYDRASTAR.
ABAQUS - a finite element material model developed by ABAQUS, Inc, which is used by SKB to model repository buffer

  14. 38 CFR 61.20 - Life Safety Code capital grants.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) VA HOMELESS PROVIDERS GRANT AND PER DIEM PROGRAM § 61.20 Life Safety Code capital grants. (a) This section sets forth provisions for obtaining a Life Safety Code capital grant under 38 U.S.C. 2012... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Life Safety Code capital...

  15. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  16. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

    In this paper, a characteristic of the parallel algorithm is presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, i.e. the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated, and the effect of the preconditioner for the matrix solver is also investigated. Finally, the hybrid (OpenMP+MPI) parallel algorithm is introduced and discussed in detail for the pressure solver. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to meet a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate the information at the neighboring domain boundaries. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effect of the matrix size and of preconditioning was also investigated. The fine-mesh calculation shows better scalability than the coarse-mesh one, because the coarse mesh cannot be decomposed into many subdomains without the communication cost dominating. The fine mesh can show good scalability when the geometry is divided with the ratio between computation and communication time taken into account. For a given mesh, single-phase flow
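    The CSR (compressed sparse row) storage mentioned above keeps only the nonzero entries plus two index arrays, so a matrix-vector product touches no zeros. A minimal self-contained sketch of the format (independent of CUPID itself):

```python
import numpy as np

def to_csr(dense):
    """Compress a dense matrix into CSR arrays:
    values, column indices, and row pointers."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))         # running count closes each row
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x using only the stored nonzeros of each row."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = values[lo:hi] @ x[col_idx[lo:hi]]
    return y
```

    In a domain-decomposed solver each rank holds the CSR rows of its own subdomain and exchanges only the boundary entries of `x` with its neighbors, which is why the computation-to-communication ratio improves with mesh size.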

  17. Examination of the accuracy of coding hospital-acquired pressure ulcer stages.

    Science.gov (United States)

    Coomer, Nicole M; McCall, Nancy T

    2013-01-01

    Pressure ulcers (PU) are considered harmful conditions that are reasonably prevented if accepted standards of care are followed. They became subject to the payment adjustment for hospital-acquired conditions (HACs) beginning October 1, 2008. We examined several aspects of the accuracy of coding for pressure ulcers under the Medicare Hospital-Acquired Condition Present on Admission (HAC-POA) Program. We used the "4010" claim format as a basis of reference to show some of the issues of the old format, such as the underreporting of pressure ulcer stages on pressure ulcer claims and how the underreporting varied by hospital characteristics. We then used the rate of Stage III and IV pressure ulcer HACs reported in the Hospital Cost and Utilization Project State Inpatient Databases to examine the sensitivity of PU HAC-POA coding to the number of diagnosis fields. We examined Medicare claims data for FYs 2009 and 2010 to assess the degree to which the presence of stage codes was underreported on pressure ulcer claims. We selected all claims with a secondary diagnosis code for a pressure ulcer site (ICD-9 diagnosis codes 707.00-707.09) that were not reported as POA (POA of "N" or "U"). We then created a binary indicator for the presence of any pressure ulcer stage diagnosis code, and we examined the percentage of claims with a diagnosis of a pressure ulcer site code with no accompanying pressure ulcer stage code. Our results point to underreporting of PU stages under the "4010" format and show that the reporting of stage codes varied across hospital type and location. Further, our results indicate that under the "5010" format a higher number of pressure ulcer HACs can be expected to be reported, and we should expect to encounter a larger percentage of pressure ulcers incorrectly coded as POA under the new format. The combination of the capture of 25 diagnosis codes under the new "5010" format and the change from ICD-9 to ICD-10 will likely alleviate the observed underreporting of

  18. Development and Application of a Code for Internal Exposure (CINEX) based on the CINDY code

    International Nuclear Information System (INIS)

    Kravchik, T.; Duchan, N.; Sarah, R.; Gabay, Y.; Kol, R.

    2004-01-01

    Internal exposure to radioactive materials at the NRCN is evaluated using the CINDY (Code for Internal Dosimetry) package. The code was developed by the Pacific Northwest Laboratory to assist in the interpretation of bioassay data, provide bioassay projections, and evaluate committed and calendar-year doses from intake or bioassay measurement data. It provides capabilities to calculate organ dose and effective dose equivalents using the International Commission on Radiological Protection (ICRP) 30 approach. The CINDY code operates under the DOS operating system, and its operation consequently requires a relatively long procedure with a lot of manual typing, which can lead to human error. A new code has been developed at the NRCN, the CINEX (Code for Internal Exposure), which is an Excel application and leads to a significant reduction in calculation time (on the order of 5-10 times) and in the risk of human error. The code uses a database containing tables which were constructed with CINDY and contain the bioassay values predicted by the ICRP 30 model after an intake of a unit of activity of each isotope. Using the database, the code then calculates the appropriate intake and consequently the committed effective dose and organ dose. Calculations with the CINEX code were compared to similar calculations with the CINDY code. The discrepancies were less than 5%, which is the rounding error of the CINDY code. Attached is a table which compares parameters calculated with the CINEX and CINDY codes (for class Y uranium).
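    The table-lookup calculation described above is simple in principle: divide the bioassay measurement by the model-predicted bioassay value per unit intake (the kind of value CINDY tabulates from the ICRP 30 models), then multiply the inferred intake by a dose coefficient. A schematic sketch, not the NRCN implementation; the numbers in the example are invented.

```python
def intake_and_dose(measured_bq, irf_per_bq_intake, dose_coeff_sv_per_bq):
    """Back-calculate the intake implied by a single bioassay result, then
    the committed effective dose. Both coefficients come from model tables;
    the values used in the test are purely illustrative."""
    intake = measured_bq / irf_per_bq_intake       # Bq taken into the body
    dose = intake * dose_coeff_sv_per_bq           # committed effective dose, Sv
    return intake, dose
```

    For example, a 5 Bq bioassay result at a time when the predicted retained fraction per unit intake is 0.01 implies a 500 Bq intake; with a hypothetical dose coefficient of 1e-6 Sv/Bq this gives 5e-4 Sv.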

  19. Design of variable-weight quadratic congruence code for optical CDMA

    Science.gov (United States)

    Feng, Gang; Cheng, Wen-Qing; Chen, Fu-Jun

    2015-09-01

    A variable-weight code family referred to as variable-weight quadratic congruence code (VWQCC) is constructed by algebraic transformation for incoherent synchronous optical code division multiple access (OCDMA) systems. Compared with the quadratic congruence code (QCC), VWQCC doubles the code cardinality and provides multiple code-sets with variable code-weight. Moreover, the bit-error rate (BER) performance of VWQCC is superior to that of conventional variable-weight codes obtained by removing or padding pulses, under the same chip-power assumption. The experimental results show that VWQCC can be well applied to OCDMA with quality-of-service (QoS) requirements.
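    For reference, the base QCC family that VWQCC transforms places, for a prime p, one pulse per row of a p-by-p frame using the quadratic congruence operator y(k) = a*k*(k+1)/2 mod p. The sketch below illustrates that base construction only, not the VWQCC transformation itself.

```python
def qcc_codeword(a, p):
    """Binary codeword of length p*p for the quadratic congruence placement
    y(k) = a*k*(k+1)/2 mod p (p prime, 1 <= a < p): row k carries its single
    pulse in column y(k), giving a constant code weight of p."""
    word = [0] * (p * p)
    for k in range(p):
        col = (a * k * (k + 1) // 2) % p     # k*(k+1) is even, division exact
        word[k * p + col] = 1
    return word
```

    Distinct values of a yield distinct placement sequences, which is the handle the paper's algebraic transformation works with to enlarge the code set.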

  20. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and is also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the case of the single jet of gas, it has been demonstrated that the implicit code can do a problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.

  1. Optimized Method for Generating and Acquiring GPS Gold Codes

    Directory of Open Access Journals (Sweden)

    Khaled Rouabah

    2015-01-01

    Full Text Available We propose a simpler and faster Gold code generator, which can be efficiently initialized to any desired code with a minimum delay. Its principle consists of generating only one sequence (code number 1), from which all the other signal codes can be produced. This is realized by simply shifting this sequence by different delays that are judiciously determined using the bicorrelation function characteristics. This is in contrast to the classical Linear Feedback Shift Register (LFSR) based Gold code generator, which requires, in addition to the shift process, a significant number of logic XOR gates and a phase selector to change the code. The presence of all these logic XOR gates in the classical LFSR-based Gold code generator introduces additional time in the generation and acquisition processes. In addition to its simplicity and rapidity, the proposed architecture, due to the total absence of XOR gates, uses fewer resources than the conventional Gold generator and can thus be produced at lower cost. Digital Signal Processing (DSP) implementations have shown that the proposed architecture presents a solution for acquiring Global Positioning System (GPS) satellite signals optimally and in a parallel way.
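    For contrast, the conventional LFSR-based baseline that the proposed generator replaces can be sketched for the GPS C/A Gold codes: two 10-stage registers G1 and G2 run in parallel, and the PRN is selected by a pair of G2 phase taps (PRN 1 uses taps 2 and 6). The shift-based scheme in the paper avoids this per-PRN tap selection by delaying a single generated sequence instead.

```python
def ca_code(tap1, tap2):
    """Classical two-LFSR GPS C/A (Gold) code generator.
    tap1/tap2 are the G2 phase-selector taps for the desired PRN."""
    g1 = [1] * 10                       # both registers start as all ones
    g2 = [1] * 10
    chips = []
    for _ in range(1023):
        chips.append(g1[9] ^ g2[tap1 - 1] ^ g2[tap2 - 1])
        fb1 = g1[2] ^ g1[9]                                  # G1 taps 3, 10
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]  # G2 taps 2,3,6,8,9,10
        g1 = [fb1] + g1[:9]
        g2 = [fb2] + g2[:9]
    return chips
```

    As a sanity check, the first ten chips of PRN 1 (taps 2 and 6) are 1100100000, octal 1440, as specified in the GPS interface documents.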

  2. Radiation dosimetry with plane-parallel ionization chambers: An international (IAEA) code of practice

    International Nuclear Information System (INIS)

    Andreo, P.

    1996-01-01

    Research on plane-parallel ionization chambers since the IAEA Code of Practice (TRS-277) was published in 1987 has expanded our knowledge on perturbation and other correction factors in ionization chamber dosimetry, and constructional details of these chambers have also been shown to be important. Different national organizations have published, or are in the process of publishing, recommendations on detailed procedures for the calibration and use of plane-parallel ionization chambers. An international working group was formed under the auspices of the IAEA, first to assess the status and validity of IAEA TRS-277, and second to develop an international Code of Practice for the calibration and use of plane-parallel ionization chambers in high-energy electron and photon beams. The purpose of this work is to describe the forthcoming Code of Practice. (author). 39 refs, 3 figs, 2 tabs

  3. Radiation dosimetry with plane-parallel ionization chambers: An international (IAEA) code of practice

    Energy Technology Data Exchange (ETDEWEB)

    Andreo, P [Lunds Hospital, Lund (Sweden). Radiophysics Dept.; Almond, P R [J.G. Brown Cancer Center, Univ. of Lousville, Lousville, KY (United States). Dept. of Radiation Oncology; Mattsson, O [Sahlgrenska Hospital, Gothenburg (Sweden). Dept. of Radiation Physics; Nahum, A E [Royal Marsden Hospital, Sutton (United Kingdom). Joint Dept. of Physics; Roos, M [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    1996-08-01

    Research on plane-parallel ionization chambers since the IAEA Code of Practice (TRS-277) was published in 1987 has expanded our knowledge on perturbation and other correction factors in ionization chamber dosimetry, and constructional details of these chambers have also been shown to be important. Different national organizations have published, or are in the process of publishing, recommendations on detailed procedures for the calibration and use of plane-parallel ionization chambers. An international working group was formed under the auspices of the IAEA, first to assess the status and validity of IAEA TRS-277, and second to develop an international Code of Practice for the calibration and use of plane-parallel ionization chambers in high-energy electron and photon beams. The purpose of this work is to describe the forthcoming Code of Practice. (author). 39 refs, 3 figs, 2 tabs.

  4. An Auto sequence Code to Integrate a Neutron Unfolding Code with thePC-MCA Accuspec

    International Nuclear Information System (INIS)

    Darsono

    2000-01-01

    In neutron spectrometry using the proton recoil method, a neutron unfolding code is needed to unfold the measured proton spectrum into the neutron spectrum. In the existing neutron spectrometry system, which was successfully installed last year, the unfolding was done separately. This manuscript reports that an auto sequence code integrating the neutron unfolding code UNFSPEC.EXE with the software facility of the PC-MCA Accuspec has been written and runs successfully, so that the new neutron spectrometry system has become compact. The auto sequence code was written based on the rules of the application program facility of the PC-MCA Accuspec and was then compiled using AC-EXE. Tests of the auto sequence code showed that binning widths of 20, 30, and 40 give slightly different spectrum shapes. A binning width around 30 gives a better spectrum, in the sense of a smaller error compared to the others. (author)

  5. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
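    The core idea behind such tools, that renaming variables should not defeat detection, can be sketched by normalizing identifiers before comparing token n-grams. This is a toy illustration of the general approach, not the actual algorithm of Sherlock, JPlag, or Moss; the keyword list is deliberately minimal.

```python
import re

KEYWORDS = {'def', 'return', 'for', 'in', 'if', 'else', 'while'}

def normalize(source):
    """Tokenize source and rename every identifier to a common placeholder,
    so that renamed-variable copies produce identical token streams."""
    source = re.sub(r'#.*', '', source)                      # strip comments
    tokens = re.findall(r'[A-Za-z_]\w*|\d+|[^\s\w]', source)
    return ['ID' if re.match(r'[A-Za-z_]', t) and t not in KEYWORDS else t
            for t in tokens]

def similarity(a, b, n=4):
    """Jaccard similarity of token n-gram sets after normalization."""
    def grams(tokens):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    ga, gb = grams(normalize(a)), grams(normalize(b))
    return len(ga & gb) / max(1, len(ga | gb))
```

    Two functions that differ only in variable names score 1.0, while a genuinely different function scores lower; production tools add winnowing, fingerprint indexing, and language-aware parsing on top of this idea.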

  6. Performance Analysis of Faulty Gallager-B Decoding of QC-LDPC Codes with Applications

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2014-06-01

    Full Text Available In this paper we evaluate the performance of the Gallager-B algorithm, used for decoding low-density parity-check (LDPC) codes, under unreliable message computation. Our analysis is restricted to LDPC codes constructed from circulant matrices (QC-LDPC codes). Using Monte Carlo simulation we investigate the effects of different code parameters on coding system performance, under a binary symmetric communication channel and an independent transient-faults model. One possible application of the presented analysis in designing memory architectures with unreliable components is considered.

  7. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  8. Preparation of the TRANSURANUS code for TEMELIN NPP

    International Nuclear Information System (INIS)

    Klouzal, J.

    2011-01-01

    In 2010, the Temelin NPP started using TVSA-T fuel supplied by JSC TVEL. The transition process included the implementation of several new core reload design codes. The TRANSURANUS code was selected for the evaluation of fuel rod thermomechanical performance. The adaptation and validation of the code were performed by the Nuclear Research Institute Rez. The TRANSURANUS code contains a wide selection of alternative models for most of the phenomena important for fuel behaviour. It was therefore necessary to select, based on comparison with experimental data, those most suitable for modeling TVSA-T fuel rods. In some cases, new models were implemented. Software tools and a methodology for the evaluation of the proposed core reload design using TRANSURANUS were also developed at NRI. The software tools include the interface to the core physics code ANDREA and a set of scripts for automated execution and processing of the computational runs. Independent confirmation of some of the vendor-specified core reload design criteria was performed using TRANSURANUS. (authors)

  9. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building...

  10. CRUCIB: an axisymmetric convection code

    International Nuclear Information System (INIS)

    Bertram, L.A.

    1975-03-01

    The CRUCIB code was written in support of an experimental program aimed at measurement of thermal diffusivities of refractory liquids. Precise values of diffusivity are necessary for realistic analysis of reactor safety problems, nuclear waste disposal procedures, and fundamental metal-forming processes. The code calculates the axisymmetric transient convective motions produced in a right circular cylindrical crucible, which is surface heated by an annular heat pulse. Emphasis of this report is placed on the input-output options of the CRUCIB code, which are tailored to assess the importance of the convective heat transfer in determining the surface temperature distribution. Use is limited to Prandtl numbers less than unity; larger values can be accommodated by replacement of a single block of the code, if desired. (U.S.)

  11. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
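    The identification in question rests on the standard syndrome test: a binary sequence s is a codeword of a code with parity-check matrix H exactly when H s^T = 0 (mod 2). A minimal sketch for the (7,4) Hamming code; the paper works with longer codes and a nucleotide-to-symbol mapping, which this illustration omits.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code
# (columns are the binary representations of 1..7).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(bits):
    """A sequence is a codeword iff its syndrome H @ s vanishes mod 2."""
    syndrome = (H @ np.array(bits)) % 2
    return not syndrome.any()
```

    A nonzero syndrome both rejects the sequence and, for a single-bit error, points at the flipped position, which is the error-correcting property the question about genomes turns on.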

  12. An Investigation of the Methods of Logicalizing the Code-Checking System for Architectural Design Review in New Taipei City

    Directory of Open Access Journals (Sweden)

    Wei-I Lee

    2016-12-01

    The New Taipei City Government developed a Code-checking System (CCS) using Building Information Modeling (BIM) technology to facilitate architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Beyond the information technology itself, the most important issue in the system's development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM), on the basis of design thinking and communication theory, to investigate the semantic differences and cognitive gaps among participants in the design review process and to propose a direction for system development. Our empirical results lead us to recommend grouping, multi-stage screening, and weighted assisted logicalization of non-quantitative building codes to improve the operability of the CCS. Furthermore, the CCS should integrate an Expert Evaluation System (EES) to evaluate design value under qualitative building codes.

  13. Performance enhancement of successive interference cancellation scheme based on spectral amplitude coding for optical code-division multiple-access systems using Hadamard codes

    Science.gov (United States)

    Eltaif, Tawfig; Shalaby, Hossam M. H.; Shaari, Sahbudin; Hamarsheh, Mohammad M. N.

    2009-04-01

    A successive interference cancellation scheme is applied to optical code-division multiple-access (OCDMA) systems with spectral amplitude coding (SAC). A detailed analysis of this system, with Hadamard codes used as signature sequences, is presented. The system can easily remove the effect of the strongest signal at each stage of the cancellation process. In addition, a simulation of the proposed system is performed in order to validate the theoretical results. The system shows a small bit error rate with a large number of active users compared to the conventional SAC OCDMA system. Our results reveal that the proposed system is efficient in eliminating the effect of multiple-user interference and in enhancing the overall performance.
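    A greatly simplified, synchronous, noise-free sketch of the cancellation idea, using bipolar Walsh-Hadamard sequences (the actual SAC-OCDMA detection structure differs; the powers and code assignment here are assumed):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = hadamard(8)[1:5]                # 4 users; skip the all-ones row
bits = np.array([1, 0, 1, 1])           # transmitted data bits
amps = np.array([4.0, 3.0, 2.0, 1.0])   # unequal received amplitudes

# Received chip sequence: superposition of all users' signed, scaled codes.
rx = (amps * (2 * bits - 1)) @ codes

# Successive interference cancellation: at each stage, detect the strongest
# remaining user, reconstruct its contribution, and subtract it.
detected = {}
residual = rx.astype(float)
for _ in range(4):
    corr = residual @ codes.T / codes.shape[1]  # per-user correlator outputs
    k = int(np.argmax(np.abs(corr)))
    detected[k] = int(corr[k] > 0)
    residual = residual - corr[k] * codes[k]    # cancel user k
```

    Because the rows of a Hadamard matrix are orthogonal, each correlator output isolates one user exactly in this idealized setting, and the residual shrinks to zero after the last stage.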

  14. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A. R. Calderbank, P. W. Shor and A. M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  15. Mapping Saldana's Coding Methods onto the Literature Review Process

    Science.gov (United States)

    Onwuegbuzie, Anthony J.; Frels, Rebecca K.; Hwang, Eunjin

    2016-01-01

    Onwuegbuzie and Frels (2014) provided a step-by-step guide illustrating how discourse analysis can be used to analyze literature. However, more works of this type are needed to address the way that counselor researchers conduct literature reviews. Therefore, we present a typology for coding and analyzing information extracted for literature…

  16. Methods for the development of large computer codes under LTSS

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1977-06-01

    TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset.

  17. Processing of the GALILEO fuel rod code model uncertainties within the AREVA LWR realistic thermal-mechanical analysis methodology

    International Nuclear Information System (INIS)

    Mailhe, P.; Barbier, B.; Garnier, C.; Landskron, H.; Sedlacek, R.; Arimescu, I.; Smith, M.; Bellanger, P.

    2013-01-01

    The availability of reliable tools, and of an associated methodology, able to accurately predict LWR fuel behavior in all conditions is of great importance for safe and economic fuel usage. For that purpose, AREVA has developed its new global fuel rod performance code GALILEO along with its associated realistic thermal-mechanical analysis methodology. This realistic methodology is based on Monte Carlo type random sampling of all relevant input variables. After outlining the AREVA realistic methodology, this paper focuses on the GALILEO code benchmarking process, on its extended experimental database, and on the assessment of the GALILEO model uncertainties. The propagation of these model uncertainties through the AREVA realistic methodology is also presented. This processing of the GALILEO model uncertainties is of the utmost importance for accurate evaluation of fuel design margins, as illustrated by some application examples. With the submittal of the GALILEO Topical Report to the U.S. NRC in 2013, GALILEO and its methodology are on the way to industrial use in a wide range of irradiation conditions. (authors)
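    The Monte Carlo random-sampling step can be sketched as follows, with a deliberately toy fuel-temperature model and assumed distributions (the GALILEO models and uncertainty data are proprietary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate model: centreline temperature as a function of a few
# uncertain inputs. The real code evaluates detailed thermal-mechanical models.
def fuel_temp(power, conductivity, gap_factor):
    return 300.0 + power / (conductivity * gap_factor)

N = 10_000
# Random sampling of input and model uncertainties (distributions assumed).
power        = rng.normal(20.0, 1.0, N)     # linear power [kW/m]
conductivity = rng.normal(3.0, 0.15, N)     # pellet conductivity [W/m-K]
gap_factor   = rng.uniform(0.9, 1.1, N)     # model uncertainty multiplier

temps = fuel_temp(power, conductivity, gap_factor)
p95 = np.percentile(temps, 95)   # upper percentile used for margin evaluation
```

    The design margin is then judged against a high percentile of the output distribution rather than a single best-estimate run, which is the essence of a realistic (as opposed to conservative deterministic) methodology.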

  18. Qualities of dental chart recording and coding.

    Science.gov (United States)

    Chantravekin, Yosananda; Tasananutree, Munchulika; Santaphongse, Supitcha; Aittiwarapoj, Anchisa

    2013-01-01

    Chart recording and coding are important processes in the healthcare informatics system, but there have been only a few reports on them in the dentistry field. The objectives of this study are to assess the quality of dental chart recording and coding, as well as the effectiveness of a lecture/workshop on this topic. The study was performed by auditing patients' charts at the TU Dental Student Clinic from July 2011 to August 2012. The chart recording mean scores ranged from 51.0-55.7%, whereas errors in the coding process occurred more often on the coder's part than on the doctor's part. The lecture/workshop improved the scores in only some topics.

  19. Development of simulation code for FBR spent fuel dissolution with rotary drum type continuous dissolver

    International Nuclear Information System (INIS)

    Sano, Yuichi; Katsurai, Kiyomichi; Washiya, Tadahiro; Koizumi, Tsutomu; Matsumoto, Satoshi

    2011-01-01

    The Japan Atomic Energy Agency (JAEA) has been studying a rotary drum type continuous dissolver for FBR spent fuel dissolution. For estimating the fuel dissolution behavior under various operational conditions in this dissolver, we have been developing the simulation code PLUM, which mainly consists of three modules for calculating the chemical reaction, mass transfer and thermal balance in the rotary drum type continuous dissolver. Under the various conditions where dissolution experiments were carried out, with the batch-wise dissolver for FBR spent fuel and with the rotary drum type continuous dissolver for UO2 fuel, it was confirmed that the fuel dissolution behaviors calculated by the PLUM code showed good agreement with the experimental ones. Based on this result, the conditions for obtaining a dissolver solution with a high HM (heavy metal: U and Pu) concentration (~500 g/L), which is required for the next step, i.e. the crystallization process, were also analyzed by this code, and appropriate operational conditions for the rotary drum type continuous dissolver, such as the feed rate and the concentration and temperature of the nitric acid, could be clarified. (author)
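    A minimal sketch of what a chemical-reaction module computes, assuming a simple surface-area-limited rate law and made-up constants (PLUM's actual models are far more detailed):

```python
# Batch dissolution of fuel in nitric acid (all constants assumed):
# dM/dt = -k * A(M) * [HNO3], with reacting area scaling as M^(2/3).
k = 5e-4           # rate constant (assumed units)
M = 100.0          # undissolved heavy metal [g]
acid = 8.0         # nitric acid concentration [mol/L]
vol = 0.2          # solution volume [L]
stoich = 0.03      # mol acid consumed per gram dissolved (assumed)
dissolved = 0.0

dt, t_end = 1.0, 3600.0
for _ in range(int(t_end / dt)):
    rate = k * M**(2 / 3) * acid        # dissolution rate [g/s]
    dM = min(rate * dt, M)              # cannot dissolve more than remains
    M -= dM
    dissolved += dM
    acid = max(acid - stoich * dM / vol, 0.0)  # acid depletion feedback

hm_conc = dissolved / vol   # heavy-metal concentration in solution [g/L]
```

    Coupling the dissolution rate to acid depletion is what lets such a model predict whether a target HM concentration is reachable under a given feed rate and acid strength.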

  20. Solution of charged particle transport equation by Monte-Carlo method in the BRANDZ code system

    International Nuclear Information System (INIS)

    Artamonov, S.N.; Androsenko, P.A.; Androsenko, A.A.

    1992-01-01

    Consideration is given to the use of the Monte-Carlo method for the solution of the charged particle transport equation and to its implementation in the BRANDZ code system under the conditions of real 3D geometry, using all the data available on radiation-to-matter interaction in multicomponent and multilayer targets. For the implantation problem, comparisons of BRANDZ results with experiments and with calculations by other codes in complex systems are presented. The results of a direct simulation of the nuclear pumping process of laser-active media by a proton beam are also included. 4 refs.; 7 figs
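    The sampling scheme underlying such Monte-Carlo transport codes can be sketched in one dimension (one-group, isotropic scattering; this is not the BRANDZ physics, and all parameters are assumed):

```python
import math
import random

random.seed(1)

SIGMA_T = 1.0        # total macroscopic cross-section [1/cm]
P_ABSORB = 0.3       # absorption probability per collision
THICKNESS = 5.0      # slab thickness [cm]

def simulate_one():
    x, mu = 0.0, 1.0                 # particle enters the slab head-on
    while True:
        # Exponentially distributed free path, projected onto the slab axis.
        x += mu * -math.log(1.0 - random.random()) / SIGMA_T
        if x < 0.0:
            return "reflected"
        if x > THICKNESS:
            return "transmitted"
        if random.random() < P_ABSORB:
            return "absorbed"
        mu = random.uniform(-1.0, 1.0)  # isotropic scatter (uniform in cosine)

tallies = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(10_000):
    tallies[simulate_one()] += 1
```

    Each history is an independent random walk; tallies over many histories estimate reflection, transmission and absorption probabilities, with real codes extending the same scheme to 3D geometry and full interaction data.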