WorldWideScience

Sample records for underlying process codes

  1. Kombucha brewing under the Food and Drug Administration model Food Code: risk analysis and processing guidance.

    Science.gov (United States)

    Nummer, Brian A

    2013-11-01

    Kombucha is a fermented beverage made from brewed tea and sugar. The taste is slightly sweet and acidic and it may have residual carbon dioxide. Kombucha is consumed in many countries as a health beverage and it is gaining in popularity in the U.S. Consequently, many retailers and food service operators are seeking to brew this beverage on site. As a fermented beverage, kombucha would be categorized in the Food and Drug Administration model Food Code as a specialized process and would require a variance with submission of a food safety plan. This special report was created to assist both operators and regulators in preparing or reviewing a kombucha food safety plan.

  2. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for PKA energy spectra and heating number under neutron irradiation

    International Nuclear Information System (INIS)

    Iwamoto, Y.; Ogawa, T.

    2016-01-01

    The modelling of damage in materials irradiated by neutrons is needed for understanding the mechanism of radiation damage in fission and fusion reactor facilities. Molecular dynamics simulations of damage cascades with full atomic interactions require information about the energy distribution of the primary knock-on atoms (PKAs). The most common way to calculate PKA energy spectra under low-energy neutron irradiation is to use the nuclear data processing code NJOY2012, which calculates group-to-group recoil cross section matrices, i.e. energy and angular recoil distributions for many reactions, from nuclear data libraries in the ENDF format. After the NJOY2012 step, SPKA6C is employed to produce PKA energy spectra by combining the recoil cross section matrices with an incident neutron energy spectrum. However, an intercomparison of different processing routes and nuclear data libraries has not been carried out yet. In particular, incident neutron energies higher than those typical of fission (~5 MeV and above) open many reaction channels, which produce a complex distribution of PKAs in energy and type. Recently, we have developed the event generator mode (EGM) in the Particle and Heavy Ion Transport code System PHITS for neutron-induced reactions in the energy region below 20 MeV. The main feature of EGM is that it produces PKAs while conserving energy and momentum in each reaction. It is used for event-by-event analysis in application fields such as soft-error analysis in semiconductors, microdosimetry in the human body, and estimation of displacements per atom (DPA) in metals. The purpose of this work is to quantify the differences in PKA spectra and in the heating number (related to kerma) between the PHITS-EGM and NJOY2012+SPKA6C calculation methods with the libraries TENDL-2015, ENDF/B-VII.1 and JENDL-4.0, for fusion-relevant materials.
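    A minimal sketch of the folding step described above (this is not code from SPKA6C or PHITS; the group structure and values are invented for illustration): the PKA spectrum is obtained by folding the group-to-group recoil cross-section matrix with the incident neutron spectrum.

        import numpy as np

        # Hypothetical 3-group recoil matrix: element [i, j] is the cross
        # section (barns) for an incident neutron in group i to produce a
        # PKA in recoil-energy group j. Values are purely illustrative.
        recoil_matrix = np.array([[0.80, 0.15, 0.05],
                                  [0.30, 0.50, 0.20],
                                  [0.10, 0.30, 0.60]])

        # Incident neutron flux per group (arbitrary units).
        neutron_flux = np.array([1.0, 0.6, 0.2])

        # PKA energy spectrum: sum over incident groups for each recoil group.
        pka_spectrum = recoil_matrix.T @ neutron_flux
        print(pka_spectrum)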

  3. Process-engineering control valves under the EC codes; Steuerventile fuer die Prozesstechnik im Geltungsbereich der EG-Richtlinien

    Energy Technology Data Exchange (ETDEWEB)

    Gohlke, B. [IMI Norgren Herion Fluidtronic GmbH und Co. KG, Fellbach (Germany)

    2003-09-01

    The European Parliament and the European Council have enacted special codes in order to implement uniform conditions in all countries of the European Community. Manufacturers of technical and commercial products are obliged to adhere to these codes. Harmonized standards, which are to be used as a tool for the implementation of the codes, are embedded at another level of the overall 'European reference literature'. Two EC codes in particular are definitive for fluid engineering: on the one hand, the EC Machinery Code (98/37/EC) and, on the other hand, the EC Pressurized Equipment Code (97/23/EC). These EC codes cover, inter alia, machinery, chemical process-engineering plants, and conventional power generating plants. Norgren-Herion, a manufacturer of fluid engineering components, saw the need to position its control valves within the scope of applicability of the EC codes. This article describes experience with the EC codes from the control valve manufacturer's point of view and examines the various qualification procedures for control valves. (orig.)

  4. Covariance data processing code. ERRORJ

    International Nuclear Information System (INIS)

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)

  5. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Iwamoto, Yosuke, E-mail: iwamoto.yosuke@jaea.go.jp; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL-2015, ENDF/B-VII.1, and JEFF-3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL-2015, ENDF/B-VII.1, and JENDL-4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL-2015 and NJOY + TENDL-2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for ⁷²Ge, ⁷⁵As, ⁸⁹Y, and ¹⁰⁹Ag in the ENDF/B-VII.1 library, and for ⁹⁰Zr and ⁵⁵Mn in the JEFF-3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL-2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL-2015 can calculate PKA spectra and heating numbers correctly.
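    The heating number discussed here is the integral of the recoil cross-section spectrum over recoil energy; a minimal sketch of that integral on a hypothetical grid (the values are illustrative, not taken from any ACE file or NJOY run):

        import numpy as np

        # Hypothetical recoil-energy grid (eV) and recoil cross-section
        # spectrum (barn/eV); real values would come from NJOY group data
        # or a PHITS-EGM tally, not from this sketch.
        energy = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
        spectrum = np.array([5e-6, 2e-6, 8e-7, 3e-7, 1e-8])

        # Heating number ~ energy-weighted integral of the recoil spectrum.
        heating = np.trapz(energy * spectrum, energy)
        print(f"heating number = {heating:.3e} barn*eV")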

  6. MRPP: multiregion processing plant code

    International Nuclear Information System (INIS)

    Kee, C.W.; McNeese, L.E.

    1976-09-01

    The report describes the machine solution of a large number (approximately 52,000) of simultaneous linear algebraic equations in which the unknowns are the concentrations of nuclides in the fuel salt of a fluid-fueled reactor (MSBR) with a continuous fuel processing plant. Most of the equations define concentrations at various points in the processing plant. The code accepts as input a generalized description of a processing plant flowsheet; it also performs the iterative adjustment of flowsheet parameters to determine concentrations throughout the flowsheet and the associated effect of the specified processing mode on overall reactor operation.
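    The solve itself is a sparse linear system; a toy version of the same pattern (a three-unknown stand-in, not the MRPP flowsheet equations) using SciPy:

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import spsolve

        # A @ c = b, where c holds nuclide concentrations at points in the
        # processing plant. This 3x3 system is a stand-in; MRPP's real
        # system has ~52,000 unknowns but the same sparse-solve structure.
        A = csr_matrix(np.array([[ 2.0, -1.0,  0.0],
                                 [-1.0,  2.0, -1.0],
                                 [ 0.0, -1.0,  2.0]]))
        b = np.array([1.0, 0.0, 0.0])   # feed/source terms

        concentrations = spsolve(A, b)
        print(concentrations)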

  7. User's manual for a process model code

    International Nuclear Information System (INIS)

    Kern, E.A.; Martinez, D.P.

    1981-03-01

    The MODEL code has been developed for computer modeling of materials processing facilities associated with the nuclear fuel cycle. However, it can also be used in other modeling applications. This report provides sufficient information for a potential user to apply the code to specific process modeling problems. Several examples that demonstrate most of the capabilities of the code are provided

  8. The 1989 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1989-12-01

    This document summarizes the 1989 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  9. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  10. CITOPP, CITMOD, CITWI, Processing codes for CITATION Code

    International Nuclear Information System (INIS)

    Albarhoum, M.

    2008-01-01

    Description of program or function: CITOPP processes the output file of the CITATION 3-D diffusion code. The program can plot axial, radial and circumferential flux distributions (in cylindrical geometry), in addition to the multiplication factor convergence. The flux distributions can be drawn for each group specified in the program and visualized on the screen. CITMOD processes both the output and the input files of the CITATION 3-D diffusion code and can visualize both axial and radial-angular models of the reactor described by the CITATION input/output files. CITWI processes the input file (CIT.INP) of the CITATION 3-D diffusion code. CIT.INP is processed to deduce the dimensions of the cell whose cross sections represent the corresponding reactor component in section 008 of CIT.INP.

  11. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  12. The Coding Process and Its Challenges

    Directory of Open Access Journals (Sweden)

    Judith A. Holton, Ph.D.

    2010-02-01

    Coding is the core process in classic grounded theory methodology. It is through coding that the conceptual abstraction of data and its reintegration as theory takes place. There are two types of coding in a classic grounded theory study: substantive coding, which includes both open and selective coding procedures, and theoretical coding. In substantive coding, the researcher works with the data directly, fracturing and analysing it, initially through open coding for the emergence of a core category and related concepts and then subsequently through theoretical sampling and selective coding of data to theoretically saturate the core and related concepts. Theoretical saturation is achieved through constant comparison of incidents (indicators) in the data to elicit the properties and dimensions of each category (code). This constant comparing of incidents continues until the process yields the interchangeability of indicators, meaning that no new properties or dimensions are emerging from continued coding and comparison. At this point, the concepts have achieved theoretical saturation and the theorist shifts attention to exploring the emergent fit of potential theoretical codes that enable the conceptual integration of the core and related concepts to produce hypotheses that account for relationships between the concepts, thereby explaining the latent pattern of social behaviour that forms the basis of the emergent theory. The coding of data in grounded theory occurs in conjunction with analysis through a process of conceptual memoing, capturing the theorist's ideation of the emerging theory. Memoing occurs initially at the substantive coding level and proceeds to higher levels of conceptual abstraction as coding proceeds to theoretical saturation and the theorist begins to explore conceptual reintegration through theoretical coding.

  13. Code-Mixing and Code Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    This study aimed to describe the specific forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors influencing those forms. The research is a descriptive qualitative case study conducted at Al Mawaddah Boarding School, Ponorogo. The analysis shows that code mixing and code switching in learning activities at Al Mawaddah Boarding School occur among Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The factors determining code mixing in the learning process include: identification of the role, the desire to explain and interpret, material sourced from the original language and its variations, and material sourced from a foreign language. The factors determining code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially at Al Mawaddah boarding school, with respect to the rules and characteristic variation in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and effective teaching and learning strategies in boarding schools.

  14. The 1996 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1996-01-01

    The codes are named 'the pre-processing' codes because they are designed to pre-process ENDF/B data for later, further processing for use in applications. This is a modular set of computer codes, each of which reads and writes evaluated nuclear data in the ENDF/B format. Each code performs one or more independent operations on the data, as described below. These codes are designed to be computer independent and are presently operational on every type of computer, from large mainframe computers to small personal computers such as the IBM-PC and Power MAC. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  15. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the finite element method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes play an important role in exploiting the capabilities of a parallel processing system. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance codes, do not require parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules; here, parallelisation of the equation solution module poses the major difficulties. Different solution schemes such as the domain decomposition method (DDM), a parallel active column solver and a substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of the different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  16. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

    This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT, RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group averages, mixtures of cross-sections, etc.; and to produce graphical plots and data comparisons. The codes are designed to operate on virtually any type of computer, including PCs. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)

  17. Arabic Natural Language Processing System Code Library

    Science.gov (United States)

    2014-06-01

    This technical note provides a brief description of a Java library for Arabic (and also English) natural language processing (NLP), containing code for training and applying the Arabic NLP system described in Stephen Tratz's paper "A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix...".

  18. Investigating the Simulink Auto-Coding Process

    Science.gov (United States)

    Gualdoni, Matthew J.

    2016-01-01

    Model based program design is the most clear and direct way to develop algorithms and programs for interfacing with hardware. While coding "by hand" results in a more tailored product, the ever-growing size and complexity of modern-day applications can cause the project workload to quickly become unreasonable for one programmer. This has generally been addressed by splitting the product into separate modules to allow multiple developers to work in parallel on the same project; however, this introduces new potential for errors in the process. The fluidity, reliability and robustness of the code rely on the abilities of the programmers to communicate their methods to one another; furthermore, multiple programmers bring multiple, potentially differing coding styles into the same product, which can cause a loss of readability or even module incompatibility. Fortunately, MathWorks has implemented an auto-coding feature that allows programmers to design their algorithms through the use of models and diagrams in the graphical programming environment Simulink, allowing the designer to visually determine what the hardware is to do. From here, the auto-coding feature handles converting the project into another programming language. This type of approach allows the designer to clearly see how the software will be directing the hardware without the need to try and interpret large amounts of code. In addition, it speeds up the programming process, minimizing the number of man-hours spent on a single project, thus reducing the chance of human error as well as project turnover time. One such project that has benefited from the auto-coding procedure is Ramses, a portion of the GNC flight software on-board Orion that has been implemented primarily in Simulink. Currently, however, auto-coding Ramses into C++ requires 5 hours of code generation time. This causes issues if the tool ever needs to be debugged, as this code generation will need to occur with each edit to any part of

  19. Calculation code revised MIXSET for Purex process

    International Nuclear Information System (INIS)

    Gonda, Kozo; Oka, Koichiro; Fukuda, Shoji.

    1979-02-01

    Revised MIXSET is a FORTRAN IV calculation code developed to simulate steady-state and transient behaviors of the Purex extraction process and to calculate the optimum operating condition of the process. Revised MIXSET includes all the functions of the MIXSET code, as listed below. a) A chemical system of up to eight components can be handled, with or without mutual dependence of the distribution of components. b) The flowrate and concentration of feed can be renewed successively at any state, transient or steady, for searching optimum operating conditions. c) Optimum inputs of feed concentrations and flowrates can be calculated to satisfy both the specification and the recovery rate of a product. d) Radioactive decay reactions can be handled for each component. Besides these functions, the following chemical reactions involved in the Purex process are newly included in the Revised MIXSET code, so that the quantitative changes of components such as H⁺, U(IV), U(VI), Pu(III), Pu(IV), NH₂OH and N₂H₄ can be simulated. (i) Reduction of Pu(IV): U⁴⁺ + 2Pu⁴⁺ + 2H₂O → UO₂²⁺ + 2Pu³⁺ + 4H⁺. (ii) Oxidation of Pu(III): 2Pu³⁺ + 3H⁺ + NO₃⁻ → 2Pu⁴⁺ + HNO₂ + H₂O. (iii) Oxidation of U(IV): U⁴⁺ + NO₃⁻ + H₂O → UO₂²⁺ + H⁺ + HNO₂; 2U⁴⁺ + O₂ + 2H₂O → 2UO₂²⁺ + 4H⁺. (iv) Decomposition of HNO₂: HNO₂ + N₂H₅⁺ → HN₃ + 2H₂O + H⁺. (author)

  20. Calculation code MIXSET for Purex process

    International Nuclear Information System (INIS)

    Gonda, Kozo; Fukuda, Shoji.

    1977-09-01

    MIXSET is a FORTRAN IV calculation code for the Purex process that simulates the dynamic behavior of solvent extraction processes in mixer-settlers. Two options permit terminating the dynamic phase by time or by achieving steady state; these options also permit continuing a calculation successively using new inputs from an arbitrary phase. A third option permits an artificially rapid approach to steady state, and a fourth option permits searching for the optimum input to satisfy both the specification and the recovery rate of the product. MIXSET handles a chemical system of up to eight components, with or without mutual dependence of the distribution of the components. The chemical system in MIXSET includes chemical reactions and/or decay reactions. Distribution data can be supplied by third-power polynomial equations or tables, and kinetic data by tables or given constants. Fluctuations of the interfacial level height in the settler are converted into flow rate changes of the organic and aqueous streams to follow the dynamic behavior of the extraction process in detail. MIXSET can be applied to flowsheet studies, start-up and/or shut-down procedure studies, and real-time process management in countercurrent solvent extraction processes. (auth.)

  1. A code inspection process for security reviews

    Science.gov (United States)

    Garzoglio, Gabriele

    2010-04-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  2. A code inspection process for security reviews

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele; /Fermilab

    2009-05-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  3. A code inspection process for security reviews

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2010-01-01

    In recent years, it has become more and more evident that software threat communities are taking an increasing interest in Grid infrastructures. To mitigate the security risk associated with the increased numbers of attacks, the Grid software development community needs to scale up effort to reduce software vulnerabilities. This can be achieved by introducing security review processes as a standard project management practice. The Grid Facilities Department of the Fermilab Computing Division has developed a code inspection process, tailored to reviewing security properties of software. The goal of the process is to identify technical risks associated with an application and their impact. This is achieved by focusing on the business needs of the application (what it does and protects), on understanding threats and exploit communities (what an exploiter gains), and on uncovering potential vulnerabilities (what defects can be exploited). The desired outcome of the process is an improvement of the quality of the software artifact and an enhanced understanding of possible mitigation strategies for residual risks. This paper describes the inspection process and lessons learned on applying it to Grid middleware.

  4. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. • Offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition. • Enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research. • Discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks. • ...

  5. Qualifying codes under software quality assurance: Two examples as guidelines for codes that are existing or under development

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, D.

    1993-05-01

    Software quality assurance is an area of concern for DOE, EPA, and other agencies due to the poor quality of software and its documentation they have received in the past. This report briefly summarizes the software development concepts and terminology increasingly employed by these agencies and provides a workable approach to scientific programming under the new requirements. Following this is a practical description of how to qualify a simulation code, based on a software QA plan that has been reviewed and officially accepted by DOE/OCRWM. Two codes have recently been baselined and qualified, so that they can be officially used for QA Level 1 work under the DOE/OCRWM QA requirements. One of them was baselined and qualified within one week. The first of the codes was the multi-phase multi-component flow code TOUGH version 1, an already existing code, and the other was a geochemistry transport code, STATEQ, that was under development. The way to accomplish qualification for both types of codes is summarized in an easy-to-follow, step-by-step fashion to illustrate how to baseline and qualify such codes through a relatively painless procedure.

  6. Qualifying codes under software quality assurance: Two examples as guidelines for codes that are existing or under development

    International Nuclear Information System (INIS)

    Mangold, D.

    1993-05-01

    Software quality assurance is an area of concern for DOE, EPA, and other agencies due to the poor quality of software and its documentation they have received in the past. This report briefly summarizes the software development concepts and terminology increasingly employed by these agencies and provides a workable approach to scientific programming under the new requirements. Following this is a practical description of how to qualify a simulation code, based on a software QA plan that has been reviewed and officially accepted by DOE/OCRWM. Two codes have recently been baselined and qualified, so that they can be officially used for QA Level 1 work under the DOE/OCRWM QA requirements. One of them was baselined and qualified within one week. The first of the codes was the multi-phase multi-component flow code TOUGH version 1, an already existing code, and the other was a geochemistry transport code, STATEQ, that was under development. The way to accomplish qualification for both types of codes is summarized in an easy-to-follow, step-by-step fashion to illustrate how to baseline and qualify such codes through a relatively painless procedure.

  7. Pre-processing of input files for the AZTRAN code

    International Nuclear Information System (INIS)

    Vargas E, S.; Ibarra, G.

    2017-09-01

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make the task easier. It supports a new input file format with specific cards divided into two blocks, mandatory cards and optional cards, pre-processes the input file to identify possible errors within it, and includes an image generator for the specific problem based on the Python interpreter. (Author)
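    The card checking described above can be illustrated with a short sketch (Python rather than D, and with invented card names; AZTRAN's actual card set and format are not reproduced here):

        # Hypothetical card-based input checker; the card names below are
        # illustrative stand-ins, not AZTRAN's real cards.
        MANDATORY = {"GEOMETRY", "MATERIALS", "QUADRATURE"}
        OPTIONAL = {"PRINT_FLUX", "PLOT"}

        def validate(card_lines):
            seen = {line.split()[0].upper() for line in card_lines if line.strip()}
            missing = MANDATORY - seen
            unknown = seen - MANDATORY - OPTIONAL
            if missing:
                print("error: missing mandatory cards:", sorted(missing))
            if unknown:
                print("warning: unrecognized cards:", sorted(unknown))
            return not missing

        print(validate(["GEOMETRY slab 10", "MATERIALS 2", "PLOT flux"]))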

  8. Description of ground motion data processing codes: Volume 3

    International Nuclear Information System (INIS)

    Sanders, M.L.

    1988-02-01

    Data processing codes developed to process ground motion at the Nevada Test Site for the Weapons Test Seismic Investigations Project are used today as part of the program to process ground motion records for the Nevada Nuclear Waste Storage Investigations Project. The work contained in this report documents and lists the codes and verifies the "PSRV" code. 39 figs

  9. Obsolescence : The underlying processes

    NARCIS (Netherlands)

    Thomsen, A.F.; Nieboer, N.E.T.; Van der Flier, C.L.

    2015-01-01

    Obsolescence, defined as the process of declining performance of buildings, is a serious threat for the value, the usefulness and the life span of housing properties. Thomsen and van der Flier (2011) developed a model in which obsolescence is categorised on the basis of two distinctions, namely

  10. Consistent Code Qualification Process and Application to WWER-1000 NPP

    International Nuclear Information System (INIS)

    Berthon, A.; Petruzzi, A.; Giannotti, W.; D'Auria, F.; Reventos, F.

    2006-01-01

    Calculation analyses using system codes are performed to evaluate the behavior of an NPP or a facility during a postulated transient, or to evaluate the capability of the code itself. A calculation analysis constitutes a process that involves the code itself, the data of the reference plant, the data about the transient, the nodalization, and the user; all these elements affect one another and affect the results. A major issue in the use of a mathematical model is the model's capability to reproduce the plant or facility behavior under steady-state and transient conditions. These aspects constitute two main checks that must be satisfied during the qualification process: the first is related to the realization of a scheme of the reference plant; the second is related to the capability to reproduce the transient behavior. The aim of this paper is to describe the UMAE (Uncertainty Method based on Accuracy Extrapolation) methodology developed at the University of Pisa for qualifying a nodalization and analysing the calculated results, and to perform the uncertainty evaluation of the system code with the CIAU code (Code with the capability of Internal Assessment of Uncertainty). The activity consists of the re-analysis of experiment BL-44 (SBLOCA), performed in the LOBI facility, and the analysis of a Kv-scaling calculation of the WWER-1000 NPP nodalization taking test BL-44 as reference. Relap5/Mod3.3 has been used as the thermal-hydraulic system code, and the standard procedure adopted at the University of Pisa has been applied to show the capability of the code to predict the significant aspects of the transient and to obtain a qualified nodalization of the WWER-1000 through a systematic qualitative and quantitative accuracy evaluation. The qualitative accuracy evaluation is based on the selection of Relevant Thermal-hydraulic Aspects (RTAs) and is a prerequisite to the application of the Fast Fourier Transform Based Method (FFTBM) which quantifies
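    As a hedged illustration of the FFTBM idea mentioned at the end (the exact figures of merit and weights used in the method are not reproduced here), one common quantity is the average amplitude, which normalizes the spectrum of the code-experiment error by the spectrum of the experimental signal; lower values mean better accuracy. The signals below are made up:

        import numpy as np

        def average_amplitude(experiment, calculation):
            """FFTBM-style average amplitude (AA): spectrum of the code-
            experiment error normalized by the experimental spectrum."""
            err = np.abs(np.fft.rfft(calculation - experiment)).sum()
            ref = np.abs(np.fft.rfft(experiment)).sum()
            return err / ref

        t = np.linspace(0.0, 10.0, 512)
        exp_signal = np.exp(-0.30 * t)            # made-up experimental trend
        calc_signal = np.exp(-0.28 * t) + 0.01    # made-up code prediction
        print(f"AA = {average_amplitude(exp_signal, calc_signal):.4f}")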

  11. Electronic data processing codes for California wildland plants

    Science.gov (United States)

    Merton J. Reed; W. Robert Powell; Bur S. Bal

    1963-01-01

    Systematized codes for plant names are helpful to a wide variety of workers who must record the identity of plants in the field. We have developed such codes for a majority of the vascular plants encountered on California wildlands and have published the codes in pocket size, using photo-reductions of the output from data processing machines. A limited number of the...

  12. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information and offer several advantages: large storage capacity, high reliability, ultra-high-speed reading from any direction, small printing size and highly efficient representation of Chinese characters. In order to obtain a clearer binarized image from a complex background, and to improve the recognition rate of QR codes, this paper researches pre-processing methods for QR codes (Quick Response codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text binarization method. Additionally, the paper introduces QR code extraction that adapts to different image sizes and a flexible image correction approach, improving the efficiency and accuracy of QR code image processing.
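    A minimal sketch of Sauvola-style adaptive binarization on a synthetic image (scikit-image's threshold_sauvola is used as a stand-in; the paper's modified algorithm and parameter choices are not reproduced):

        import numpy as np
        from skimage.filters import threshold_sauvola

        # Synthetic grayscale image with an uneven background, standing in
        # for a QR code photograph; values are arbitrary.
        rng = np.random.default_rng(0)
        image = rng.random((64, 64)) * 0.3 + np.linspace(0.0, 0.5, 64)[None, :]
        image[16:48:8, 16:48] = 1.0   # bright stripes stand in for modules

        # Sauvola adaptive threshold; window_size and k are common defaults,
        # not the parameters from the paper.
        binary = image > threshold_sauvola(image, window_size=15, k=0.2)
        print(binary.mean())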

  13. Parallel processing Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1994-01-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine

  14. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  15. Offer and Acceptance under the Russian Civil Code

    Directory of Open Access Journals (Sweden)

    Valery Musin

    2013-01-01

    The article deals with the procedure for entering into a contract under Russian civil law, in both domestic and foreign markets. Offer and acceptance are considered in the light of relevant provisions of the Russian Civil Codes of 1922 and 1964 and the Code currently in effect, as compared with the rules of the UN Convention on Contracts for the International Sale of Goods 1980 and the UNIDROIT Principles of International Commercial Contracts 2010.

  16. Offer and acceptance under the Russian Civil Code

    OpenAIRE

    Musin, Valery

    2013-01-01

    The article deals with the procedure for entering into a contract under Russian civil law, in both domestic and foreign markets. Offer and acceptance are considered in the light of relevant provisions of the Russian Civil Codes of 1922 and 1964 and the Code currently in effect, as compared with the rules of the UN Convention on Contracts for the International Sale of Goods 1980 and the UNIDROIT Principles of International Commercial Contracts 2010.

  17. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  18. Coding strategies for cochlear implants under adverse environments

    Science.gov (United States)

    Tahmina, Qudsia

    Cochlear implants are electronic prosthetic devices that restore partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quiet listening conditions, limitations remain on speech perception under adverse environments such as background noise, reverberation and band-limited channels; we propose strategies that improve the intelligibility of speech transmitted over telephone networks, reverberated speech, and speech in the presence of background noise. For telephone-processed speech, we examine the effects of adding low-frequency and high-frequency information to the band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high-frequency information; this study therefore supports the design of algorithms that extend the bandwidth towards higher frequencies. The results also indicated added benefit from hearing aids for bimodal listeners in all four types of listening conditions. Speech understanding in acoustically reverberant environments is always a difficult task for hearing-impaired listeners. Reverberated sound consists of the direct sound, early reflections and late reflections; late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction (SS) to suppress the reverberant energy from late reflections. Results from listening tests for two reverberant conditions (RT60 = 0.3 s and 1.0 s) indicated significant improvement when stimuli were processed with the SS strategy. The proposed strategy operates with little to no prior information on the signal and the room characteristics and can therefore potentially be implemented in real-time CI
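    A bare-bones, single-frame sketch of magnitude spectral subtraction of the kind proposed (framing, overlap-add, and the study's late-reflection estimator are omitted; the noise estimate here is an assumed constant):

        import numpy as np

        def spectral_subtract(frame, noise_mag, floor=0.01):
            """Subtract an estimated noise/late-reverberation magnitude
            spectrum from one frame, keeping the noisy phase."""
            spectrum = np.fft.rfft(frame)
            mag, phase = np.abs(spectrum), np.angle(spectrum)
            clean = np.maximum(mag - noise_mag, floor * mag)  # spectral floor
            return np.fft.irfft(clean * np.exp(1j * phase), n=len(frame))

        frame = np.random.default_rng(1).standard_normal(256)  # stand-in frame
        noise_mag = np.full(129, 0.5)   # assumed constant noise estimate
        enhanced = spectral_subtract(frame, noise_mag)
        print(enhanced.shape)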

  19. Component processes underlying future thinking.

    Science.gov (United States)

    D'Argembeau, Arnaud; Ortoleva, Claudia; Jumentier, Sabrina; Van der Linden, Martial

    2010-09-01

    This study sought to investigate the component processes underlying the ability to imagine future events, using an individual-differences approach. Participants completed several tasks assessing different aspects of future thinking (i.e., fluency, specificity, amount of episodic details, phenomenology) and were also assessed with tasks and questionnaires measuring various component processes that have been hypothesized to support future thinking (i.e., executive processes, visual-spatial processing, relational memory processing, self-consciousness, and time perspective). The main results showed that executive processes were correlated with various measures of future thinking, whereas visual-spatial processing abilities and time perspective were specifically related to the number of sensory descriptions reported when specific future events were imagined. Furthermore, individual differences in self-consciousness predicted the subjective feeling of experiencing the imagined future events. These results suggest that future thinking involves a collection of processes that are related to different facets of future-event representation.

  20. Dilute: A code for studying beam evolution under rf noise

    International Nuclear Information System (INIS)

    Shih, H.; Ellison, J.A.; Schiesser, W.E.

    1993-01-01

    Longitudinal beam dynamics under rf noise has been modeled by Dome, Krinsky, and Wang using a diffusion-in-action PDE. If the primary interest is the evolution of the beam in action, it is much simpler to integrate the model PDE than to undertake tracking simulations. Here we describe the code that we developed to solve the model PDE using the numerical Method of Lines. Features of the code include (1) computation of the distribution in action for the initial beam from a Gaussian or user-supplied distribution in longitudinal phase space, (2) computation of the diffusion coefficient for white noise or from a user-supplied spectral density for non-white noise, (3) discretization of the model PDE using finite-difference or Galerkin finite-element approximations with a uniform or non-uniform grid, and (4) integration of the system of ODEs in time by the solver RKF45 or a user-supplied ODE solver
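    A minimal Method-of-Lines sketch in the same spirit (a diffusion-in-action equation discretized on a uniform action grid and integrated with SciPy's RK45 solver standing in for RKF45; the diffusion coefficient, grid, and boundary handling are placeholders, not Dilute's):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Method of Lines for dF/dt = d/dJ( D(J) dF/dJ ) on an action grid.
        J = np.linspace(0.0, 1.0, 101)
        dJ = J[1] - J[0]
        D = 1e-3 * (1.0 + J)                     # placeholder diffusion coefficient

        def rhs(t, F):
            flux = D[:-1] * np.diff(F) / dJ      # D * dF/dJ at cell faces
            dFdt = np.zeros_like(F)
            dFdt[1:-1] = np.diff(flux) / dJ      # divergence of the flux
            return dFdt                          # endpoints held fixed

        F0 = np.exp(-((J - 0.3) / 0.05) ** 2)    # initial distribution in action
        sol = solve_ivp(rhs, (0.0, 50.0), F0, method="RK45")
        print(sol.y[:, -1].max())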

  1. Software quality and process improvement in scientific simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J.; Webster, R. [Los Alamos National Lab., NM (United States)

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.

  2. Performance analysis of simultaneous dense coding protocol under decoherence

    Science.gov (United States)

    Huang, Zhiming; Zhang, Cai; Situ, Haozhen

    2017-09-01

    The simultaneous dense coding (SDC) protocol is useful in designing quantum protocols. We analyze the performance of the SDC protocol under the influence of noisy quantum channels. Six kinds of paradigmatic Markovian noise along with one kind of non-Markovian noise are considered. The joint success probability of both receivers and the success probabilities of one receiver are calculated for three different locking operators. Some interesting properties have been found, such as invariance and symmetry. Among the three locking operators we consider, the SWAP gate is most resistant to noise and results in the same success probabilities for both receivers.

  3. Coded Ultrasound for Blood Flow Estimation Using Subband Processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2008-01-01

    This paper investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise ratio (SNR) and velocity estimation performance. In this paper, broadband coded signals are used to increase SNR, followed by subband processing: the received broadband signal is filtered using a set of narrow-band filters, and estimating the velocity in each of the bands and averaging the results yields better performance compared with what would be possible when transmitting a narrow-band signal. Because the excitation signal is broadband, it retains good spatial resolution after pulse compression, which means that time can be saved by using the same data for B-mode imaging and blood flow estimation. Two different coding schemes are used in this paper, Barker codes and Golay codes. The performance of the codes...
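    Pulse compression with one of the codes mentioned, a Barker-13 sequence, amounts to matched filtering; a toy sketch (no ultrasound front end or subband step, just the compression):

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

        # Received line: a delayed, attenuated echo of the code plus noise.
        rng = np.random.default_rng(2)
        rx = 0.1 * rng.standard_normal(100)
        rx[40:53] += 0.5 * barker13

        # Pulse compression = matched filtering (correlation with the code).
        compressed = np.correlate(rx, barker13, mode="same")
        print("echo peak near sample", int(np.argmax(np.abs(compressed))))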

  4. Cell-assembly coding in several memory processes.

    Science.gov (United States)

    Sakurai, Y

    1998-01-01

    The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information: individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information: correlations of activity among some of the recorded neurons appear to change across multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes applied to different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes applied to different types of stimuli with or without temporal processing, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed with a view to future experimental approaches. Copyright 1998 Academic Press.

  5. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing computational requirements in JAERI, the introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for a pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed for large-scale nuclear codes by examining the following items: 1) The present computational load in JAERI is analyzed by compiling computer utilization statistics. 2) Vector processing efficiency is estimated for the ten most heavily used nuclear codes by analyzing their dynamic behavior when run on a scalar machine. 3) Vector processing efficiency is measured for five other nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) The effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics of JAERI jobs. Problems with vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  6. Colors and geometric forms in the work process information coding

    Directory of Open Access Journals (Sweden)

    Čizmić Svetlana

    2006-01-01

    The aim of the research was to establish the meaning of colors and geometric shapes in transmitting information in the work process. A sample of 100 students associated 50 situations, each related to regular tasks in the work process, with 12 colors and 4 geometric forms in a previously chosen color. Based on the chosen color-geometric shape-situation assignments, the idea of the research was to find regularities in the coding of information, to examine whether those regularities provide meaningful data for each individual code, and to explain which codes are better and more applicable representatives of the examined situations.

  7. Development code for group constant processing

    International Nuclear Information System (INIS)

    Su'ud, Z.

    1997-01-01

    In this paper, the methods, formalism and algorithms related to the group-constant processing problem, starting from a basic library such as ENDF/B-VI, are described. Basically the problems can be grouped as follows: the treatment of resolved resonances using the NR approximation, the treatment of unresolved resonances using a statistical method, the treatment of low-lying resonances using the intermediate resonance approximation, the treatment of the thermal energy region, and the treatment of group-transfer matrix cross sections. It is necessary to treat the interference between resonances properly, especially in the unresolved region. In this paper the resonance problems are treated with the Breit-Wigner formalism, and the Doppler function is treated using a Padé approximation for calculational efficiency. Finally, some samples of calculational results for selected nuclei, mainly comparisons between the methods, are discussed. (author)
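    For illustration, a single-level Breit-Wigner resonance shape of the kind treated here can be evaluated in a few lines (this is the standard SLBW capture form; the resonance parameters below are invented, not taken from any library):

        import numpy as np

        def slbw_capture(E, E0, Gn, Gg, sigma0):
            """Single-level Breit-Wigner capture shape; E, E0 and the widths
            are in eV, sigma0 is the peak cross section in barns."""
            G = Gn + Gg
            return sigma0 * np.sqrt(E0 / E) * (G**2 / 4) / ((E - E0)**2 + G**2 / 4)

        E = np.linspace(5.0, 8.0, 7)   # eV, invented evaluation points
        print(slbw_capture(E, E0=6.67, Gn=0.0015, Gg=0.023, sigma0=2.1e4))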

  8. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2010-04-16

    ... documents from ICC's Chicago District Office: International Code Council, 4051 W Flossmoor Road, Country... Energy Conservation Code. International Existing Building Code. International Fire Code. International...

  9. Progress on China nuclear data processing code system

    Science.gov (United States)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the whole energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. The programming language of Ruler is Fortran 90. Ruler has been tested on 32-bit computers under the Windows XP and Linux operating systems. Verification of Ruler has been performed by comparison with calculation results obtained with the NJOY99 [3] processing code. Validation has been performed using the WIMSD5B code.

  10. Data processing codes for fatigue and tensile tests

    International Nuclear Information System (INIS)

    Sanchez Sarmiento, Gustavo; Iorio, A.F.; Crespi, J.C.

    1981-01-01

    Processing fatigue and tensile test data to obtain several parameters of engineering interest requires a considerable amount of numerical calculation. In order to reduce the time spent on this work and to establish standard data processing for sets of similar tests, it is very advantageous to have calculation codes that run on a computer. Two codes have been developed in FORTRAN: one predicts the cyclic properties of materials from monotonic and incremental or multiple cyclic step tests (the ENSPRED code), and the other reduces data coming from strain-controlled low-cycle fatigue tests (the ENSDET code). Two examples are included using Zircaloy-4 material from different manufacturers. (author)

  11. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
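    A compact example of the emulator construction described (scikit-learn's Gaussian process regressor stands in for the paper's emulator; the "computer code" is a cheap analytic stand-in, and the kernel choice is an assumption):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def simulator(x):                 # cheap stand-in for an expensive code
            return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        rng = np.random.default_rng(3)
        X_train = rng.random((30, 2))     # 30 design points, 2 uncertain inputs
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
        gp.fit(X_train, simulator(X_train))

        # Emulator predictions with uncertainty at untried inputs; the
        # predictive std is what uncertainty analysis works from.
        X_new = rng.random((5, 2))
        mean, std = gp.predict(X_new, return_std=True)
        print(mean, std)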

  12. Summary of ENDF/B pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1981-12-01

    This document contains the summary documentation for the ENDF/B pre-processing codes: LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc. For the latest published documentation on the methods used in these codes see UCRL-50400, Vol.17 parts A-E, Lawrence Livermore Laboratory (1979)

  13. ERRORJ. Covariance processing code. Version 2.2

    International Nuclear Information System (INIS)

    Chiba, Go

    2004-07-01

    ERRORJ is a covariance processing code that produces covariance data for multi-group cross sections, which are essential for uncertainty analyses of nuclear parameters such as the neutron multiplication factor. ERRORJ can process covariance data for cross sections, including resonance parameters, and for the angular and energy distributions of secondary neutrons, which cannot be processed by other covariance processing codes. ERRORJ has been modified, and version 2.2 has been developed. This document describes the modifications and how to use the code. The main modifications are as follows: non-diagonal elements of covariance matrices are calculated in the resonance energy region; an option for high-speed calculation is implemented; the perturbation amount is optimized in sensitivity calculations; the effect of resonance self-shielding on the covariance of multi-group cross sections can be taken into account; and a compact covariance format proposed by N.M. Larson can be read. (author)
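    Multi-group covariances of the kind ERRORJ produces typically enter an uncertainty analysis through the standard sandwich rule, var(k) = S C Sᵀ; a toy sketch with invented 3-group numbers (not output from ERRORJ):

        import numpy as np

        # Hypothetical 3-group relative covariance matrix of a cross section.
        C = np.array([[4.0e-4, 1.0e-4, 0.0],
                      [1.0e-4, 2.5e-4, 5.0e-5],
                      [0.0,    5.0e-5, 9.0e-4]])

        # Hypothetical sensitivities of k-eff to each group's cross section.
        S = np.array([0.10, 0.25, 0.05])

        rel_var = S @ C @ S                # sandwich rule: S C S^T
        print(f"relative k-eff uncertainty: {np.sqrt(rel_var):.3e}")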

  14. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  15. Development of process simulation code for reprocessing plant and process analysis for solvent degradation and solvent washing waste

    International Nuclear Information System (INIS)

    Tsukada, Tsuyoshi; Takahashi, Keiki

    1999-01-01

    We developed a process simulation code for an entire nuclear fuel reprocessing plant. The code can be used on a PC. Almost all of the equipment in the reprocessing plant is included in the code, and the mass balance model of each item of equipment is based on the distribution factors of its outflow streams. All models are connected between outlet flow and inlet flow according to the process flow sheet. We estimated the amount of DBP from TBP degradation in the entire process by using the developed code. Most of the DBP is generated in the Pu refining process by the effect of α radiation from Pu, which is extracted in the solvent. On the other hand, very little DBP is generated in the U refining process. We therefore propose simplification of the solvent washing process and volume reduction of the alkali washing waste in the U refining process. The first Japanese commercial reprocessing plant is currently under construction at Rokkasho Mura. Recently, for the sake of process simplification, the original process design has been changed. Using our code, we analyzed the original process and the simplified process. According to our results, the volume of alkali waste solution in the low-level liquid treatment process will be reduced by half in the simplified process. (author)
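
    The distribution-factor mass-balance idea described above is easy to sketch. In the hypothetical Python fragment below, each equipment item splits every inlet species among its outlet streams by fixed fractions, and units are chained outlet-to-inlet following the flow sheet; the unit names and factors are invented, not the code's actual data.

        def split(inlet, factors):
            """inlet: {species: amount}; factors: {stream: {species: fraction}}."""
            return {stream: {sp: inlet.get(sp, 0.0) * frac.get(sp, 0.0) for sp in inlet}
                    for stream, frac in factors.items()}

        feed = {"U": 100.0, "Pu": 1.0, "FP": 5.0}           # kg/batch, illustrative
        extractor = split(feed, {
            "organic":   {"U": 0.998, "Pu": 0.995, "FP": 0.01},
            "raffinate": {"U": 0.002, "Pu": 0.005, "FP": 0.99},
        })
        scrub = split(extractor["organic"], {
            "product": {"U": 0.999, "Pu": 0.999, "FP": 0.2},
            "waste":   {"U": 0.001, "Pu": 0.001, "FP": 0.8},
        })
        print(scrub["product"])    # stream passed to the next unit in the flow sheet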

  16. Ensemble coding remains accurate under object and spatial visual working memory load.

    Science.gov (United States)

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

    A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

  17. Post-processing of the TRAC code's results

    International Nuclear Information System (INIS)

    Baron, J.H.; Neuman, D.

    1987-01-01

    The TRAC code serves for the analysis of accidents in nuclear installations from the thermohydraulic point of view. A program has been developed with the aim of rapidly processing the information generated by the code, with on-screen graphics capability in both high and low resolution, as well as hard-copy output through a printer or plotter. Although the programs are intended to be used after the TRAC runs, they may also be used while TRAC is running, so as to observe the calculation in progress. The advantages of employing this type of tool, its present capabilities and its possibilities for expansion according to the user's needs are described herein. (Author)

  18. ERRORJ. Covariance processing code system for JENDL. Version 2

    International Nuclear Information System (INIS)

    Chiba, Gou

    2003-09-01

    ERRORJ is the covariance processing code system for Japanese Evaluated Nuclear Data Library (JENDL) that can produce group-averaged covariance data to apply it to the uncertainty analysis of nuclear characteristics. ERRORJ can treat the covariance data for cross sections including resonance parameters as well as angular distributions and energy distributions of secondary neutrons which could not be dealt with by former covariance processing codes. In addition, ERRORJ can treat various forms of multi-group cross section and produce multi-group covariance file with various formats. This document describes an outline of ERRORJ and how to use it. (author)

  19. Linking CATHENA with other computer codes through a remote process

    Energy Technology Data Exchange (ETDEWEB)

    Vasic, A.; Hanna, B.N.; Waddington, G.M. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Sabourin, G. [Atomic Energy of Canada Limited, Montreal, Quebec (Canada); Girard, R. [Hydro-Quebec, Montreal, Quebec (Canada)

    2005-07-01

    'Full text:' CATHENA (Canadian Algorithm for THErmalhydraulic Network Analysis) is a computer code developed by Atomic Energy of Canada Limited (AECL). The code uses a transient, one-dimensional, two-fluid representation of two-phase flow in piping networks. CATHENA is used primarily for the analysis of postulated upset conditions in CANDU reactors; however, the code has found a wider range of applications. In the past, the CATHENA thermalhydraulics code included other specialized codes, i.e. ELOCA and the Point LEPreau CONtrol system (LEPCON) as callable subroutine libraries. The combined program was compiled and linked as a separately named code. This code organizational process is not suitable for independent development, maintenance, validation and version tracking of separate computer codes. The alternative solution to provide code development independence is to link CATHENA to other computer codes through a Parallel Virtual Machine (PVM) interface process. PVM is a public domain software package, developed by Oak Ridge National Laboratory and enables a heterogeneous collection of computers connected by a network to be used as a single large parallel machine. The PVM approach has been well accepted by the global computing community and has been used successfully for solving large-scale problems in science, industry, and business. Once development of the appropriate interface for linking independent codes through PVM is completed, future versions of component codes can be developed, distributed separately and coupled as needed by the user. This paper describes the coupling of CATHENA to the ELOCA-IST and the TROLG2 codes through a PVM remote process as an illustration of possible code connections. ELOCA (Element Loss Of Cooling Analysis) is the Industry Standard Toolset (IST) code developed by AECL to simulate the thermo-mechanical response of CANDU fuel elements to transient thermalhydraulics boundary conditions. A separate ELOCA driver program

  20. Linking CATHENA with other computer codes through a remote process

    International Nuclear Information System (INIS)

    Vasic, A.; Hanna, B.N.; Waddington, G.M.; Sabourin, G.; Girard, R.

    2005-01-01

    'Full text:' CATHENA (Canadian Algorithm for THErmalhydraulic Network Analysis) is a computer code developed by Atomic Energy of Canada Limited (AECL). The code uses a transient, one-dimensional, two-fluid representation of two-phase flow in piping networks. CATHENA is used primarily for the analysis of postulated upset conditions in CANDU reactors; however, the code has found a wider range of applications. In the past, the CATHENA thermalhydraulics code included other specialized codes, i.e. ELOCA and the Point LEPreau CONtrol system (LEPCON) as callable subroutine libraries. The combined program was compiled and linked as a separately named code. This code organizational process is not suitable for independent development, maintenance, validation and version tracking of separate computer codes. The alternative solution to provide code development independence is to link CATHENA to other computer codes through a Parallel Virtual Machine (PVM) interface process. PVM is a public domain software package, developed by Oak Ridge National Laboratory and enables a heterogeneous collection of computers connected by a network to be used as a single large parallel machine. The PVM approach has been well accepted by the global computing community and has been used successfully for solving large-scale problems in science, industry, and business. Once development of the appropriate interface for linking independent codes through PVM is completed, future versions of component codes can be developed, distributed separately and coupled as needed by the user. This paper describes the coupling of CATHENA to the ELOCA-IST and the TROLG2 codes through a PVM remote process as an illustration of possible code connections. ELOCA (Element Loss Of Cooling Analysis) is the Industry Standard Toolset (IST) code developed by AECL to simulate the thermo-mechanical response of CANDU fuel elements to transient thermalhydraulics boundary conditions. A separate ELOCA driver program starts, ends
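
    The PVM interface itself is a C/Fortran library, but the coupling pattern both records describe - two independently maintained codes exchanging boundary conditions through a message interface each time step - can be illustrated language-neutrally. The toy Python socket sketch below is only an analogy for that pattern; all names and the trivial "fuel model" are hypothetical, and it is not the CATHENA/ELOCA interface.

        import json, socket, threading, time

        def fuel_code(port=50007):
            # stands in for the remote process (an ELOCA-like fuel driver)
            srv = socket.create_server(("localhost", port))
            conn, _ = srv.accept()
            with conn:
                for _ in range(3):
                    bc = json.loads(conn.recv(4096))            # coolant boundary condition
                    reply = {"clad_T": bc["coolant_T"] + 50.0}  # trivial stand-in fuel model
                    conn.sendall(json.dumps(reply).encode())

        threading.Thread(target=fuel_code, daemon=True).start()
        time.sleep(0.3)                                         # let the "remote" code start

        # stands in for the thermalhydraulics side of the interface
        th = socket.create_connection(("localhost", 50007))
        coolant_T = 550.0
        for step in range(3):
            th.sendall(json.dumps({"coolant_T": coolant_T}).encode())
            clad_T = json.loads(th.recv(4096))["clad_T"]
            coolant_T += 1.0                                    # advance the TH solution
            print(f"step {step}: clad temperature {clad_T} K")
        th.close()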

  1. Real-time Color Codes for Assessing Learning Process

    OpenAIRE

    Dzelzkalēja, L; Kapenieks, J

    2016-01-01

    Effective assessment is an important way for improving the learning process. There are existing guidelines for assessing the learning process, but they lack holistic digital knowledge society considerations. In this paper the authors propose a method for real-time evaluation of students’ learning process and, consequently, for quality evaluation of teaching materials both in the classroom and in the distance learning environment. The main idea of the proposed Color code method (CCM) is to use...

  2. Bilingual processing of ASL-English code-blends: The consequences of accessing two lexical representations simultaneously

    OpenAIRE

    Emmorey, Karen; Petrich, Jennifer; Gollan, Tamar H.

    2012-01-01

    Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for...

  3. Use of NESTLE computer code for NPP transition process analysis

    International Nuclear Information System (INIS)

    Gal'chenko, V.V.

    2001-01-01

    A newly created WWER-440 reactor model using the NESTLE code is discussed. Results of 'fast' and 'slow' transition processes based on it are presented. This model was developed for the Rovno NPP reactor, and it can also be used for the WWER-1000 reactor at the Zaporozhe NPP

  4. SCAMPI: A code package for cross-section processing

    International Nuclear Information System (INIS)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-01-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis

  5. SCAMPI: A code package for cross-section processing

    Energy Technology Data Exchange (ETDEWEB)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  6. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic < semantic), whereas picture memory performance remained high across processing conditions. These data provided strong support for the dual-coding model.

  7. Single-Trial Evoked Potential Estimating Based on Sparse Coding under Impulsive Noise Environment

    Directory of Open Access Journals (Sweden)

    Nannan Yu

    2018-01-01

    Estimating single-trial evoked potentials (EPs) corrupted by the spontaneous electroencephalogram (EEG) can be regarded as a signal denoising problem. Sparse coding has had significant success in signal denoising, and EPs have been proven to have strong sparsity over an appropriate dictionary. In sparse coding, the noise is generally considered to be a Gaussian random process. However, some studies have shown that the background noise in EPs may present an impulsive characteristic which is far from Gaussian but suitable to be modeled by the α-stable distribution (1 < α ≤ 2). Consequently, the performance of general sparse coding will degrade or even fail. In view of this, we present a new sparse coding algorithm using p-norm optimization for single-trial EP estimation. The algorithm can track the underlying EPs corrupted by α-stable distribution noise, trial by trial, without the need to estimate the α value. Simulations and experiments on human visual evoked potentials and event-related potentials are carried out to examine the performance of the proposed approach. Experimental results show that the proposed method is effective in estimating single-trial EPs under an impulsive noise environment.
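
    A minimal sketch of the p-norm idea, assuming a proximal-gradient treatment of an l_p (1 < p ≤ 2) data-fidelity term with an l_1 sparsity penalty, is given below in Python; the dictionary, signal and heavy-tailed noise are synthetic stand-ins, and this is not the authors' exact algorithm.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def lp_sparse_code(y, D, p=1.2, lam=0.05, step=1e-2, iters=2000):
            x = np.zeros(D.shape[1])
            for _ in range(iters):
                r = y - D @ x
                grad = -D.T @ (np.abs(r) ** (p - 1) * np.sign(r))  # d/dx (1/p)||r||_p^p
                x = soft(x - step * grad, step * lam)              # prox of lam*||x||_1
            return x

        rng = np.random.default_rng(1)
        D = rng.standard_normal((128, 256)) / np.sqrt(128)    # overcomplete dictionary
        x_true = np.zeros(256); x_true[[10, 50, 200]] = [1.0, -0.8, 0.6]
        noise = 0.02 * rng.standard_cauchy(128)               # heavy-tailed stand-in noise
        y = D @ x_true + noise

        x_hat = lp_sparse_code(y, D)
        print("largest recovered atoms:", np.argsort(-np.abs(x_hat))[:3])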

  8. A Processing Approach to the Dual Coding Hypothesis

    Science.gov (United States)

    Kosslyn, Stephen M.; And Others

    1976-01-01

    Investigates whether imagery and verbal encoding use different processing mechanisms and attempts to discover whether the processes underlying the use of imagery to retain words are also involved in like-modality perception. (Author/RK)

  9. On transform coding tools under development for VP10

    Science.gov (United States)

    Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao

    2016-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools have already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
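
    As a toy illustration of per-block transform selection of the kind described (not VP10's actual transform set or rate-distortion search), the Python sketch below tries a DCT-II and a DST-VII-like basis on a residue block and keeps whichever yields the sparser quantized coefficient set.

        import numpy as np

        N = 8
        n = np.arange(N)
        DCT2 = np.cos(np.pi * n[:, None] * (2 * n[None, :] + 1) / (2 * N))
        DST7ish = np.sin(np.pi * (n[:, None] + 1) * (2 * n[None, :] + 1) / (2 * N + 1))

        def pick_transform(residue, q=0.1):
            best = None
            for name, T in (("DCT", DCT2), ("DST", DST7ish)):
                coeff = T @ residue
                cost = np.count_nonzero(np.round(coeff / q))  # crude rate proxy
                if best is None or cost < best[0]:
                    best = (cost, name, coeff)
            return best

        ramp = np.linspace(0, 1, N)        # inter-predicted residues often ramp up
        print(pick_transform(ramp)[1])     # a DST-like basis tends to win on ramps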

  10. Channel modeling, signal processing and coding for perpendicular magnetic recording

    Science.gov (United States)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by

  11. Development and preliminary validation of flux map processing code MAPLE

    International Nuclear Information System (INIS)

    Li Wenhuai; Zhang Xiangju; Dang Zhen; Chen Ming'an; Lu Haoliang; Li Jinggang; Wu Yuanbao

    2013-01-01

    The self-reliant flux map processing code MAPLE was developed by China General Nuclear Power Corporation (CGN). The weight coefficient method (WCM), polynomial expansion method (PEM) and thin plate spline (TPS) method were applied to fit the deviation between measured and predicted detector signals over the two-dimensional radial plane, and to interpolate or extrapolate the deviation at non-instrumented locations. Comparison of results in the test cases shows that the TPS method can capture the information of curved fitting lines better than the other methods. The measured flux map data of the Lingao Nuclear Power Plant were processed using MAPLE as validation test cases, combined with the SMART code. Validation results show that the calculation results of MAPLE are reasonable and satisfactory. (authors)
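
    A minimal 2-D thin plate spline interpolator of the type named above can be written in a few lines of numpy; the sketch below fits measured-minus-predicted deviations at a handful of instrumented locations and evaluates the fit at a non-instrumented point. Coordinates and deviation values are invented, and this is not the MAPLE implementation.

        import numpy as np

        def tps_phi(r):
            with np.errstate(divide="ignore", invalid="ignore"):
                v = r**2 * np.log(r)
            return np.where(r > 0, v, 0.0)

        def tps_fit(pts, vals):
            n = len(pts)
            K = tps_phi(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
            P = np.hstack([np.ones((n, 1)), pts])            # affine part: 1, x, y
            A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
            coef = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
            w, a = coef[:n], coef[n:]
            def f(q):
                r = np.linalg.norm(q - pts, axis=-1)
                return tps_phi(r) @ w + a[0] + a[1:] @ q
            return f

        pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.2]])
        dev = np.array([0.01, -0.02, 0.00, 0.03, -0.01])     # signal deviations
        f = tps_fit(pts, dev)
        print(f(np.array([0.5, 0.5])))   # interpolated deviation, non-instrumented spot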

  12. Enhancement of the SPARC90 code to pool scrubbing events under jet injection regime

    Energy Technology Data Exchange (ETDEWEB)

    Berna, C., E-mail: ceberes@iie.upv.es [Instituto de Ingeniería Energética, Universitat Politècnica de València (UPV), Camino de Vera 14, 46022 Valencia (Spain); Escrivá, A.; Muñoz-Cobo, J.L. [Instituto de Ingeniería Energética, Universitat Politècnica de València (UPV), Camino de Vera 14, 46022 Valencia (Spain); Herranz, L.E., E-mail: luisen.herranz@ciemat.es [Unit of Nuclear Safety Research Division of Nuclear Fission, CIEMAT, Avda. Complutense 22, 28040 Madrid (Spain)

    2016-04-15

    Highlights: • Review of the most recent literature concerning submerged jets. • Emphasize all variables and processes occurring along the jet region. • Highlight the gaps of knowledge still existing related to submerged jets. • Enhancement of SPARC90-Jet to estimate aerosol removal under jet injection regime. • Validation of the SPARC90-Jet results against pool scrubbing experimental data. - Abstract: Submerged gaseous jets may be of outstanding relevance in many industrial processes and may be of particular significance in severe nuclear accident scenarios, as in the Fukushima accident. Even though pool scrubbing has been traditionally associated with low injection velocities, there are a number of potential scenarios in which fission product trapping in aqueous ponds might also occur under jet injection regime (like SGTR meltdown sequences in PWRs and SBO ones in BWRs). The SPARC90 code was developed to determine the fission product trapping in pools during severe accidents. The code assumes that carrier gas arrives at the water ponds at low or moderate velocities and it forms a big bubble that eventually detaches from the injection pipe. However, particle laden gases may enter the water at very high velocities resulting in a submerged gas jet instead. This work presents the fundamentals, major hypotheses and changes introduced into the code in order to estimate particle removal during gas injection in pools under the jet regime (SPARC90-Jet). A simplified and reliable approach to submerged jet hydrodynamics has been implemented on the basis of updated equations for jet hydrodynamics and aerosol removal, so that gas–liquid and droplet-particles interactions are described. The code modifications have been validated as far as possible. However, no suitable hydrodynamic tests have been found in the literature, so that an indirect validation has been conducted through comparisons against data from pool scrubbing experiments. Besides, this validation

  13. Enhancement of the SPARC90 code to pool scrubbing events under jet injection regime

    International Nuclear Information System (INIS)

    Berna, C.; Escrivá, A.; Muñoz-Cobo, J.L.; Herranz, L.E.

    2016-01-01

    Highlights: • Review of the most recent literature concerning submerged jets. • Emphasize all variables and processes occurring along the jet region. • Highlight the gaps of knowledge still existing related to submerged jets. • Enhancement of SPARC90-Jet to estimate aerosol removal under jet injection regime. • Validation of the SPARC90-Jet results against pool scrubbing experimental data. - Abstract: Submerged gaseous jets may be of outstanding relevance in many industrial processes and may be of particular significance in severe nuclear accident scenarios, as in the Fukushima accident. Even though pool scrubbing has been traditionally associated with low injection velocities, there are a number of potential scenarios in which fission product trapping in aqueous ponds might also occur under jet injection regime (like SGTR meltdown sequences in PWRs and SBO ones in BWRs). The SPARC90 code was developed to determine the fission product trapping in pools during severe accidents. The code assumes that carrier gas arrives at the water ponds at low or moderate velocities and it forms a big bubble that eventually detaches from the injection pipe. However, particle laden gases may enter the water at very high velocities resulting in a submerged gas jet instead. This work presents the fundamentals, major hypotheses and changes introduced into the code in order to estimate particle removal during gas injection in pools under the jet regime (SPARC90-Jet). A simplified and reliable approach to submerged jet hydrodynamics has been implemented on the basis of updated equations for jet hydrodynamics and aerosol removal, so that gas–liquid and droplet-particles interactions are described. The code modifications have been validated as far as possible. However, no suitable hydrodynamic tests have been found in the literature, so that an indirect validation has been conducted through comparisons against data from pool scrubbing experiments. Besides, this validation

  14. Performance analysis of linear codes under maximum-likelihood decoding: a tutorial

    National Research Council Canada - National Science Library

    Sason, Igal; Shamai, Shlomo

    2006-01-01

    ..., upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds where focus is put on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we ad...

  15. THE LEGAL STATUS OF COMPANIES UNDER THE NEW CIVIL CODE

    Directory of Open Access Journals (Sweden)

    Lucian Bernd SĂULEANU

    2017-10-01

    The new Civil Code sets provisions regarding the liability of shareholders, the organization and functioning of the legal entity, the annulment of documents issued by the management bodies of the legal entity, the company contract, the regime of contributions, and company types: simple partnership, unlimited partnership, simple limited partnership, limited liability company, joint stock company, partnership limited by shares, cooperatives, and other types of company.

  16. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    Science.gov (United States)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

    Wavefront coding as an athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the added advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize its working state in non-ideal temperature environments and supports meeting the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional process of structural/thermal/optical integrated analysis. Besides, we design and build the optical model and corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system using SolidWorks, and the resulting changes in curvature, refractive index and the distances between the lenses are obtained. Then, we import the deformed model into ZEMAX for ray tracing and obtain the changes of the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and the image restorability, which provides a basis and reference for the optimal design of such systems. The results show that a single-material infrared wavefront coding athermal system can tolerate axial temperature gradients up to a temperature fluctuation of 60°C, a much higher limit than for radial temperature gradients.

  17. The role of the PIRT process in identifying code improvements and executing code development

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.

    1997-01-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a low probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications

  18. The role of the PIRT process in identifying code improvements and executing code development

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.E. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Boyack, B.E. [Los Alamos National Lab., NM (United States)

    1997-07-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  19. The mining code under the light of shale gas

    International Nuclear Information System (INIS)

    Dubreuil, Thomas; Romi, Raphael

    2013-01-01

    The authors analyze the evolution and challenges of the French legal context, notably the French mining code, in relation to the emerging issue of shale gas exploitation. They first draw lessons from the law published in 2011, which focused on the use of the hydraulic fracturing technique in order to forbid any non-conventional hydrocarbon exploitation. They comment on the content of different legal and official texts which have been published since then, and which notably evoked the use of other exploration and exploitation techniques and weakened the 2011 law. In a second part, they discuss political issues such as the influence of the European framework on energy policy, and the integration of mining, energy and land planning policies, which puts the mining code into question

  20. Methods for the development of large computer codes under LTSS

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1977-06-01

    TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset

  1. Reliability of Broadcast Communications Under Sparse Random Linear Network Coding

    OpenAIRE

    Brown, Suzie; Johnson, Oliver; Tassi, Andrea

    2018-01-01

    Ultra-reliable Point-to-Multipoint (PtM) communications are expected to become pivotal in networks offering future dependable services for smart cities. In this regard, sparse Random Linear Network Coding (RLNC) techniques have been widely employed to provide an efficient way to improve the reliability of broadcast and multicast data streams. This paper addresses the pressing concern of providing a tight approximation to the probability of a user recovering a data stream protected by this kin...
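
    For intuition only: classic dense RLNC over GF(q) admits a closed-form recovery probability, prod_{i=0}^{k-1}(1 - q^(i-N)) for k source packets and N >= k received coded packets. The sparse case this paper targets has no such simple form, so the Python snippet below is just a dense baseline, not the paper's approximation.

        def dense_rlnc_decode_prob(k, N, q=2):
            # probability a random k x N matrix over GF(q) has full row rank k
            p = 1.0
            for i in range(k):
                p *= 1.0 - q ** (i - N)
            return p

        for extra in range(4):   # effect of a few redundant receptions
            print(extra, round(dense_rlnc_decode_prob(16, 16 + extra, q=2), 4))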

  2. Coded ultrasound for blood flow estimation using subband processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2007-01-01

    This paper further investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise ratio (SNR) and velocity estimation performance. In this paper, broadband coded signals are used to increase SNR, followed by sub-band processing. The received broadband signal is filtered using a set of narrow-band filters. Estimating the velocity in each of the bands and averaging the results yields better performance compared to what would be possible when transmitting a narrow-band pulse directly. Also, the spatial resolution of the narrow-band pulse would be too poor for brightness-mode (B-mode) imaging, and additional transmissions would be required to update the B-mode image. In the described approach, there is no need for additional transmissions, because
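
    A conceptual numpy sketch of that sub-band idea, under an invented single-scatterer signal model and made-up system parameters (it is not the authors' estimator), is:

        import numpy as np

        fs, c, prf = 100e6, 1540.0, 5e3        # sample rate, sound speed, pulse rate
        t = np.arange(2048) / fs
        v_true = 0.2                           # scatterer velocity [m/s]
        tau = 2 * v_true / (c * prf)           # inter-emission echo delay [s]

        def echo(delay):                       # invented broadband chirp echo model
            tt = t - 5e-6 - delay
            return np.exp(-(tt / 1e-6) ** 2) * np.cos(2 * np.pi * (5e6 * tt + 0.5e12 * tt**2))

        r1, r2 = echo(0.0), echo(tau)
        R1, R2 = np.fft.rfft(r1), np.fft.rfft(r2)
        freqs = np.fft.rfftfreq(len(t), 1 / fs)

        v_est = []
        for f_lo in (4.0e6, 4.5e6, 5.0e6, 5.5e6):          # four 0.5 MHz sub-bands
            band = (freqs >= f_lo) & (freqs < f_lo + 0.5e6)
            cross = np.sum(R2[band] * np.conj(R1[band]))   # phase ~ -2*pi*fc*tau
            fc = freqs[band].mean()
            v_est.append(-np.angle(cross) * c * prf / (4 * np.pi * fc))
        print(np.mean(v_est), "m/s, true value", v_true)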

  3. Effect of difference between group constants processed by codes TIMS and ETOX on integral quantities

    International Nuclear Information System (INIS)

    Takano, Hideki; Ishiguro, Yukio; Matsui, Yasushi.

    1978-06-01

    Group constants of 235 U, 238 U, 239 Pu, 240 Pu and 241 Pu have been produced with the processing code TIMS using the evaluated nuclear data of JENDL-1. The temperature- and composition-dependent self-shielding factors have been calculated for the two cases with and without considering mutual interference of resonant nuclei. Using the group constants set produced by the TIMS code, the integral quantities, i.e. multiplication factor, Na-void reactivity effect and Doppler reactivity effect, are calculated and compared with those calculated using the cross section set produced by the ETOX code, to evaluate the accuracy of the approximate calculation method in ETOX. The self-shielding factors in individual energy groups differ considerably between the two codes. For the fast reactor assemblies under study, however, the integral quantities calculated with these two sets are in good agreement with each other, because of eventual cancellation of errors. (auth.)

  4. Developing improved MD codes for understanding processive cellulases

    International Nuclear Information System (INIS)

    Crowley, M F; Nimlos, M R; Himmel, M E; Uberbacher, E C; Brooks III, C L; Walker, R C

    2008-01-01

    The mechanism of action of cellulose-degrading enzymes is illuminated through a multidisciplinary collaboration that uses molecular dynamics (MD) simulations and expands the capabilities of MD codes to allow simulations of enzymes and substrates on petascale computational facilities. There is a class of glycoside hydrolase enzymes called cellulases that are thought to decrystallize and processively depolymerize cellulose using biochemical processes that are largely not understood. Understanding the mechanisms involved and improving the efficiency of this hydrolysis process through computational models and protein engineering presents a compelling grand challenge. A detailed understanding of cellulose structure, dynamics and enzyme function at the molecular level is required to direct protein engineers to the right modifications or to understand if natural thermodynamic or kinetic limits are in play. Much can be learned about processivity by conducting carefully designed molecular dynamics (MD) simulations of the binding and catalytic domains of cellulases with various substrate configurations, solvation models and thermodynamic protocols. Most of these numerical experiments, however, will require significant modification of existing code and algorithms in order to efficiently use current (terascale) and future (petascale) hardware to the degree of parallelism necessary to simulate a system of the size proposed here. This work will develop MD codes that can efficiently use terascale and petascale systems, not just for simple classical MD simulations, but also for more advanced methods, including umbrella sampling with complex restraints and reaction coordinates, transition path sampling, steered molecular dynamics, and quantum mechanical/molecular mechanical simulations of systems the size of cellulose degrading enzymes acting on cellulose

  5. Streamline processing of discrete nuclear spectra by means of an autoregularized iteration process (the KOLOBOK code)

    International Nuclear Information System (INIS)

    Gadzhokov, V.; Penev, I.; Aleksandrov, L.

    1979-01-01

    A brief description is given of the KOLOBOK computer code, designed for streamline processing of discrete nuclear spectra with a symmetric Gaussian shape of the single line on computers of the ES series, models 1020 and above. The program solves the stream of nonlinear problems generated in discrete spectrometry by means of an autoregularized iteration process. The Fortran-4 text of the code is reported in an Appendix

  6. SSYST. A code system to analyze LWR fuel rod behavior under accident conditions

    International Nuclear Information System (INIS)

    Gulden, W.; Meyder, R.; Borgwaldt, H.

    1982-01-01

    SSYST (Safety SYSTem) is a modular system to analyze the behavior of light water reactor fuel rods and fuel rod simulators under accident conditions. It has been developed in close cooperation between Kernforschungszentrum Karlsruhe (KfK) and the Institut fuer Kerntechnik und Energiewandlung (IKE), University of Stuttgart, under contract of Projekt Nukleare Sicherheit (PNS) at KfK. Although originally aimed at single rod analysis, features are available to calculate effects such as blockage ratios of bundles and whole cores. A number of in-pile and out-of-pile experiments were used to assess the system. The main differences versus codes like FRAP-T with similar applications are (1) an open-ended modular code organization, (2) availability of modules of different sophistication levels for the same physical processes, and (3) a preference for simple models wherever possible. The first feature makes SSYST a very flexible tool, easily adapted to changing requirements; the second enables the user to select computational models adequate to the significance of the physical process. Together with the third feature, this leads to short execution times. The analysis of transient rod behavior under LOCA boundary conditions, e.g., takes 2 min of CPU time (IBM-3033), so that extensive parametric studies become possible

  7. Code of Conduct for Gas Marketers : rule made under part 3 of the Ontario Energy Board Act, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-02

    Text of the code of conduct for gas marketers in Ontario is presented. This code sets the minimum standards under which a gas marketer may sell or offer to sell gas to a low-volume consumer, or act as an agent or broker with respect to the sale of gas. The document describes the standards and principles regarding: (1) fair marketing practices, (2) identification, (3) information to be maintained by a gas marketer, (4) confidentiality of consumer information, (5) conditions in offers, (6) contracts, (7) contract renewals, (8) assignment, sale and transfer contracts, (9) independent arms-length consumer complaints resolution process, and (10) penalties for breach of this code.

  8. Code of Conduct for Gas Marketers : rule made under part 3 of the Ontario Energy Board Act, 1998

    International Nuclear Information System (INIS)

    1999-01-01

    Text of the code of conduct for gas marketers in Ontario is presented. This code sets the minimum standards under which a gas marketer may sell or offer to sell gas to a low-volume consumer, or act as an agent or broker with respect to the sale of gas. The document describes the standards and principles regarding: (1) fair marketing practices, (2) identification, (3) information to be maintained by a gas marketer, (4) confidentiality of consumer information, (5) conditions in offers, (6) contracts, (7) contract renewals, (8) assignment, sale and transfer contracts, (9) independent arms-length consumer complaints resolution process, and (10) penalties for breach of this code

  9. Updated Covariance Processing Capabilities in the AMPX Code System

    International Nuclear Information System (INIS)

    Wiarda, Dorothea; Dunn, Michael E.

    2007-01-01

    A concerted effort is in progress within the nuclear data community to provide new cross-section covariance data evaluations to support sensitivity/uncertainty analyses of fissionable systems. The objective of this work is to update processing capabilities of the AMPX library to process the latest Evaluated Nuclear Data File (ENDF)/B formats to generate covariance data libraries for radiation transport software such as SCALE. The module PUFF-IV was updated to allow processing of new ENDF covariance formats in the resolved resonance region. In the resolved resonance region, covariance matrices are given in terms of resonance parameters, which need to be processed into covariance matrices with respect to the group-averaged cross-section data. The parameter covariance matrix can be quite large if the evaluation has many resonances. The PUFF-IV code has recently been used to process an evaluation of 235U, which was prepared in collaboration between Oak Ridge National Laboratory and Los Alamos National Laboratory.
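
    The parameter-to-group propagation step at the heart of such processing can be shown schematically with the sandwich rule C_sigma = S C_p S^T, where C_p is the resonance-parameter covariance and S is the sensitivity matrix of group cross sections to those parameters. The Python sketch below uses invented numbers for illustration; it is not PUFF-IV's algorithm.

        import numpy as np

        # 3 resonance parameters: energy E0, neutron width Gn, gamma width Gg
        C_p = np.diag([1e-4, 4e-4, 9e-4])            # parameter covariances
        C_p[1, 2] = C_p[2, 1] = -2e-4                # Gn/Gg anti-correlation

        # sensitivities d(sigma_g)/d(param) for 4 energy groups (e.g. from perturbation)
        S = np.array([[ 0.8,  1.2,  0.1],
                      [-0.5,  0.9,  0.3],
                      [ 0.1,  0.4,  0.6],
                      [ 0.0,  0.1,  0.2]])

        C_sigma = S @ C_p @ S.T                      # group cross-section covariance
        std = np.sqrt(np.diag(C_sigma))
        corr = C_sigma / np.outer(std, std)          # group-to-group correlations
        print(np.round(corr, 2))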

  10. Calculation code of mass and heat transfer in a pulsed column for Purex process

    International Nuclear Information System (INIS)

    Tsukada, Takeshi; Takahashi, Keiki

    1993-01-01

    A calculation code for extraction behavior analysis in a pulsed column employed in the extraction process of a reprocessing plant was developed. This code was also combined with our previously developed calculation code for axial temperature profiles in a pulsed column. The one-dimensional dispersion model was employed for both the extraction behavior analysis and the axial temperature profile analysis. The reported values of the fluid characteristics coefficients, the transfer coefficients and the diffusivities in the pulsed column were used. The calculated steady-state concentration profiles of HNO 3 , U and Pu agree well with the reported experimental results. The concentration and temperature profiles were calculated under operating conditions which induce abnormal U extraction behavior, i.e., the U extraction zone moves to the bottom of the column. Though there are slight differences between calculated and experimental values, it appears that our code can be applied to simulation under normal operating conditions and relatively slow transient conditions. The Pu accumulation phenomenon was also analyzed with this code, and the accumulation tendency is similar to the reported analysis results. (author)
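
    A bare-bones illustration of the one-dimensional dispersion model named above, assuming a steady dispersion-advection-removal balance discretized to a tridiagonal system (placeholder coefficients, not the code's actual model), is:

        import numpy as np

        n, L = 100, 2.0                      # nodes, column height [m]
        dz = L / (n - 1)
        E, u, k = 1e-3, 5e-3, 0.02           # dispersion, velocity, removal rate

        A = np.zeros((n, n)); b = np.zeros(n)
        for i in range(1, n - 1):            # E c'' - u c' - k c = 0, central diffs
            A[i, i - 1] = E / dz**2 + u / (2 * dz)
            A[i, i]     = -2 * E / dz**2 - k
            A[i, i + 1] = E / dz**2 - u / (2 * dz)
        A[0, 0] = 1.0; b[0] = 1.0            # fixed inlet concentration
        A[-1, -1], A[-1, -2] = 1.0, -1.0     # zero-gradient outlet

        c = np.linalg.solve(A, b)
        print(c[::20])                       # concentration profile down the column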

  11. Calculation code PULCO for Purex process in pulsed column

    International Nuclear Information System (INIS)

    Gonda, Kozo; Matsuda, Teruo

    1982-03-01

    The calculation code PULCO, which can simulate the Purex process using a pulsed column as an extractor, has been developed. PULCO is based on the fundamental concept that mass transfer within a pulsed column occurs through the interface between liquid drops and the continuous-phase fluid; it differs from conventional codes in that the phenomena actually occurring in a pulsed column, such as the generation of liquid drops, their rise and fall, and the coalescence of liquid drops, are directly reflected and can be correctly simulated. In PULCO, actually measured values of the fundamental quantities representing the extraction behavior of liquid drops in a pulsed column are incorporated, such as the mass transfer coefficient of each component, the diameter and velocity of liquid drops in the pulsed column, the holdup of the dispersed phase, and the axial turbulent-flow diffusion coefficient. Verification of the results calculated with PULCO was carried out by installing a pulsed column of 50 mm inside diameter and 2 m length with 40 plate stages in a glove box for an unirradiated uranium-plutonium mixed system. The results of the calculation and the test were in good agreement, and the validity of PULCO was confirmed. (Kako, I.)

  12. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme

    Directory of Open Access Journals (Sweden)

    Winch Peter J

    2011-04-01

    Background: In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff, and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. Methods: Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Results: Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Conclusions: Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.

  13. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.

    Science.gov (United States)

    Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland

    2011-04-08

    In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff, and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.

  14. Single integrated device for optical CDMA code processing in dual-code environment.

    Science.gov (United States)

    Huang, Yue-Kai; Glesk, Ivan; Greiner, Christoph M; Iazikov, Dmitri; Mossberg, Thomas W; Wang, Ting; Prucnal, Paul R

    2007-06-11

    We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code length.

  15. MHD code using multi graphical processing units: SMAUG+

    Science.gov (United States)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations, with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
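
    The decomposition idea behind such a multi-device solver can be sketched without GPUs at all: split the global grid into per-device tiles and exchange one-cell halos between neighbours before each stencil update. The numpy fragment below plays this out with a list of tiles standing in for devices; it is a schematic, not SMAUG+ code.

        import numpy as np

        nx, tiles = 16, 4
        u = np.random.rand(nx, nx)
        # split columns across "devices", adding one halo column on each side
        parts = [np.pad(u[:, i*(nx//tiles):(i+1)*(nx//tiles)], ((0, 0), (1, 1)))
                 for i in range(tiles)]

        def exchange(parts):
            for i in range(tiles):               # fill halos from neighbouring tiles
                if i > 0:
                    parts[i][:, 0] = parts[i - 1][:, -2]
                if i < tiles - 1:
                    parts[i][:, -1] = parts[i + 1][:, 1]

        exchange(parts)
        # one 5-point smoothing step on each tile's interior; halos keep it valid
        for p in parts:
            p[1:-1, 1:-1] += 0.1 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                    + p[1:-1, 2:] + p[1:-1, :-2] - 4 * p[1:-1, 1:-1])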

  16. UNICOS CPC6: automated code generation for process control applications

    International Nuclear Information System (INIS)

    Fernandez Adiego, B.; Blanco Vinuela, E.; Prieto Barreiro, I.

    2012-01-01

    The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS). As a part of this framework, UNICOS-CPC provides a well defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) to develop CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated file syntax are defined. The UAB core is the generic part of this software; it discovers and dynamically calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is introduced. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA generator (based on PVSS) and the CPC wizard, a dedicated plug-in created to provide the user with a friendly GUI (Graphical User Interface). A tool called UAB Bootstrap will manage the different UAB components, like CPC, and its dependencies with the resource packages. This tool guides the control system developer during the installation, update and execution of the UAB components. (authors)

  17. UNICOS CPC6: Automated Code Generation for Process Control Applications

    CERN Document Server

    Fernandez Adiego, B; Prieto Barreiro, I

    2011-01-01

    The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS) [1]. As a part of this framework, UNICOS-CPC provides a well defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) [2] to develop CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated file syntax are defined. The UAB core is the generic part of this software; it discovers and dynamically calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is introduced. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA g...

  18. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    International Nuclear Information System (INIS)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code using an outer iteration with a Picard scheme. The Picard scheme involves successive updating of the coefficient matrix based on previously calculated values. The scheme is a simple and effective method for nonlinear problems, but its effectiveness greatly depends on the under-relaxation capability. Accuracy and speed of calculation depend very sensitively on the under-relaxation factor used in the outer iteration that updates the axial mass flow via the continuity equation. The under-relaxation factor in MATRA is generally a fixed value that is determined empirically. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code compared with calculation using a fixed under-relaxation factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence for mass conservation in the subchannel code MATRA. The most efficient approach for adaptive under-relaxation appears to be very problem dependent
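
    A toy version of such a scheme, with invented adaptation constants and a synthetic fixed-point problem (not MATRA's equations), can make the mechanism concrete: the relaxation factor w is grown while the residual keeps falling and cut back when it rises.

        import numpy as np

        def picard_adaptive(g, x0, w=0.5, tol=1e-10, max_it=200):
            x, res_prev = x0, np.inf
            for it in range(max_it):
                x_new = (1 - w) * x + w * g(x)       # under-relaxed Picard update
                res = np.linalg.norm(x_new - x)
                if res < tol:
                    return x_new, it, w
                # grow w while converging, cut it back when the residual rises
                w = min(1.0, w * 1.1) if res < res_prev else max(0.05, w * 0.5)
                x, res_prev = x_new, res
            return x, max_it, w

        # fixed point of x = cos(3x): the steep slope there makes a large fixed w unstable
        g = lambda x: np.cos(3 * x)
        x, iters, w_final = picard_adaptive(g, np.array([0.1, 0.1]))
        print(iters, "iterations, final w =", round(w_final, 3), "x =", x)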

  19. PERFORMANCE ANALYSIS OF OPTICAL CDMA SYSTEM USING VC CODE FAMILY UNDER VARIOUS OPTICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    HASSAN YOUSIF AHMED

    2012-06-01

    Full Text Available The intent of this paper is to study the performance of spectral-amplitude-coding optical code-division multiple-access (OCDMA) systems using the Vector Combinatorial (VC) code under various optical parameters. This code can be constructed algebraically, based on Euclidean vectors, for any positive integer number. One of the important properties of this code is that the maximum cross-correlation is always one, which means that multi-user interference (MUI) and phase-induced intensity noise are reduced. Transmitter and receiver structures based on unchirped fiber Bragg gratings (FBGs) using the VC code, taking into account the effects of intensity, shot and thermal noise sources, are demonstrated. The impact of fiber distance on the bit error rate (BER) is reported using a commercial optical systems simulator, Virtual Photonic Instrument (VPI™). The VC code is compared mathematically with reported codes which use similar techniques. We analyzed and characterized the fiber link, received power, BER and channel spacing. The performance and optimization of the VC code in a SAC-OCDMA system are reported. By comparing the theoretical and simulation results taken from VPI, we have demonstrated that, for a high number of users, even at higher data rates, the effective source power is adequate when the VC code is used. It is also found that as the channel spacing goes from very narrow to wider, the BER decreases, with the best performance occurring at a spacing bandwidth between 0.8 and 1 nm. We have shown that the SAC system utilizing the VC code significantly improves the performance compared with the reported codes.

  20. Processing of individual items during ensemble coding of facial expressions

    Directory of Open Access Journals (Sweden)

    Huiyun Li

    2016-09-01

    Full Text Available There is growing evidence that human observers are able to extract the mean emotion or other types of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation for individual faces in a face set. However, most of these results were based on judgments under limited processing resources. We examined a wider range of exposure times and observed how the relationship between the extraction of a mean and the representation of individual facial expressions changes. The results showed that with an exposure time of 50 milliseconds, observers were more sensitive to the mean representation than to individual representations, replicating the typical findings in the literature. With longer exposure times, however, observers were able to extract both individual and mean representations more accurately. Furthermore, diffusion-model analysis revealed that the mean representation is more prone to suffer from the noise accumulated in redundant processing time and leads to a more conservative decision bias, whereas individual representations seem more resistant to this noise. The results suggest that the encoding of emotional information from multiple faces may take two forms: single-face processing and crowd-face processing.

  1. 76 FR 37034 - Certain Employee Remuneration in Excess of $1,000,000 Under Internal Revenue Code Section 162(m)

    Science.gov (United States)

    2011-06-24

    ... Certain Employee Remuneration in Excess of $1,000,000 Under Internal Revenue Code Section 162(m) AGENCY... remuneration in excess of $1,000,000 under the Internal Revenue Code (Code). The proposed regulations clarify... stock options, it is intended that the directors may retain discretion as to the exact number of options...

  2. Annotating long intergenic non-coding RNAs under artificial selection during chicken domestication.

    Science.gov (United States)

    Wang, Yun-Mei; Xu, Hai-Bo; Wang, Ming-Shan; Otecko, Newton Otieno; Ye, Ling-Qun; Wu, Dong-Dong; Zhang, Ya-Ping

    2017-08-15

    Numerous biological functions of long intergenic non-coding RNAs (lincRNAs) have been identified. However, the contribution of lincRNAs to the domestication process has remained elusive. Following domestication from their wild ancestors, animals display substantial changes in many phenotypic traits. Therefore, it is possible that diverse molecular drivers play important roles in this process. We analyzed 821 transcriptomes in this study and annotated 4754 lincRNA genes in the chicken genome. Our population genomic analysis indicates that 419 lincRNAs potentially evolved during artificial selection related to the domestication of chicken, while a comparative transcriptomic analysis identified 68 lincRNAs that were differentially expressed under different conditions. We also found 47 lincRNAs linked to special phenotypes. Our study provides a comprehensive view of the genome-wide landscape of lincRNAs in chicken. This will promote a better understanding of the roles of lincRNAs in domestication, and the genetic mechanisms associated with the artificial selection of domestic animals.

  3. Mapping Saldana's Coding Methods onto the Literature Review Process

    Science.gov (United States)

    Onwuegbuzie, Anthony J.; Frels, Rebecca K.; Hwang, Eunjin

    2016-01-01

    Onwuegbuzie and Frels (2014) provided a step-by-step guide illustrating how discourse analysis can be used to analyze literature. However, more works of this type are needed to address the way that counselor researchers conduct literature reviews. Therefore, we present a typology for coding and analyzing information extracted for literature…

  4. A computer code simulating multistage chemical exchange column under wide range of operating conditions

    International Nuclear Information System (INIS)

    Yamanishi, Toshihiko; Okuno, Kenji

    1996-09-01

    A computer code has been developed to simulate a multistage CECE (Combined Electrolysis Chemical Exchange) column. The basic equations are solved by the Newton-Raphson method. The independent variables are the atom fractions of D and T in each stage for the case where H is dominant within the column; these are replaced by the atom fractions of H and T when D is dominant. Effective techniques have also been developed to obtain a solution of the basic equations: a procedure for setting the initial values of the independent variables, and a procedure for ensuring convergence of the Newton-Raphson method. The computer code allows us to simulate the column behavior under a wide range of operating conditions. Even for a severe case, where the dominant species changes along the column height, the code can give a solution of the basic equations. (author)
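
    The Newton-Raphson kernel the record refers to can be illustrated generically (a sketch only; the actual CECE stage equations are not reproduced here, and F below is a placeholder residual function):

    ```python
    import numpy as np

    def newton_raphson(F, x0, tol=1e-10, max_iter=50, h=1e-7):
        """Generic Newton-Raphson for F(x) = 0 with a finite-difference Jacobian.
        In a CECE column simulation, x would hold the stage-wise atom fractions
        and F the per-stage material-balance/exchange-equilibrium residuals."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            f = F(x)
            if np.linalg.norm(f) < tol:
                return x
            J = np.empty((x.size, x.size))
            for j in range(x.size):        # finite-difference Jacobian, column by column
                xp = x.copy()
                xp[j] += h
                J[:, j] = (F(xp) - f) / h
            x = x - np.linalg.solve(J, f)  # Newton step
        raise RuntimeError("Newton-Raphson failed to converge")

    # toy usage with a known solution x = (1, 2)
    F = lambda x: np.array([x[0]**2 - 1.0, x[0] * x[1] - 2.0])
    print(newton_raphson(F, np.array([0.5, 0.5])))
    ```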

  5. Asymmetric Spatial Processing Under Cognitive Load.

    Science.gov (United States)

    Naert, Lien; Bonato, Mario; Fias, Wim

    2018-01-01

    Spatial attention allows us to selectively process information within a certain location in space. Despite the vast literature on spatial attention, the effect of cognitive load on spatial processing is still not fully understood. In this study we added cognitive load to a spatial processing task, so as to see whether it would differentially impact upon the processing of visual information in the left versus the right hemispace. The main paradigm consisted of a detection task that was performed during the maintenance interval of a verbal working memory task. We found that increasing cognitive working memory load had a more negative impact on detecting targets presented on the left side compared to those on the right side. The strength of the load effect correlated with the strength of the interaction on an individual level. The implications of an asymmetric attentional bias with a relative disadvantage for the left (vs the right) hemispace under high verbal working memory (WM) load are discussed.

  6. Keeping a common bawdy house becomes a "serious offence" under Criminal Code.

    Science.gov (United States)

    2010-10-01

    New federal regulations targeting organized crime will make keeping a common bawdy house a "serious offence" under the Criminal Code. Sex work advocates reacted by calling the measure a serious step back that will undermine the protection of sex workers' human rights, safety, dignity and health.

  7. Convergence of macrostates under reproducible processes

    International Nuclear Information System (INIS)

    Rau, Jochen

    2010-01-01

    I show that whenever a system undergoes a reproducible macroscopic process the mutual distinguishability of macrostates, as measured by their relative entropy, diminishes. This extends the second law which regards only ordinary entropies, and hence only the distinguishability between macrostates and one specific reference state (equidistribution). The new result holds regardless of whether the process is linear or nonlinear. Its proof hinges on the monotonicity of quantum relative entropy under arbitrary coarse grainings, even those that cannot be represented by trace-preserving completely positive maps.
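
    In standard notation, the monotonicity property the proof hinges on can be stated as follows (a textbook rendering, not quoted from the paper):

    ```latex
    % Relative entropy between two states never increases under a
    % coarse graining \mathcal{E} (in particular, under any
    % reproducible macroscopic process):
    S(\rho \,\|\, \sigma) \;\geq\; S\big(\mathcal{E}(\rho) \,\|\, \mathcal{E}(\sigma)\big),
    \qquad
    S(\rho \,\|\, \sigma) \equiv \operatorname{Tr}\rho\,(\ln \rho - \ln \sigma).
    ```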

  8. Fuel corrosion processes under waste disposal conditions

    International Nuclear Information System (INIS)

    Shoesmith, D.W.

    2000-01-01

    The release of the majority of radionuclides from spent nuclear fuel under permanent disposal conditions will be controlled by the rate of dissolution of the UO2 fuel matrix. In this manuscript the mechanism of the coupled anodic (fuel dissolution) and cathodic (oxidant reduction) reactions which constitute the overall fuel corrosion process is reviewed, and the many published observations on fuel corrosion under disposal conditions are discussed. The primary emphasis is on summarizing the overall mechanistic behaviour and establishing the primary factors likely to control fuel corrosion. Included are discussions on the influence of various oxidants, including radiolytic ones, pH, temperature, groundwater composition, and the formation of corrosion product deposits. The relevance of data recorded on unirradiated UO2 to the interpretation of spent fuel behaviour is included. Based on the review, the data used to develop fuel corrosion models under the conditions anticipated in Yucca Mountain (NV, USA) are evaluated

  9. The current status of cyanobacterial nomenclature under the "prokaryotic" and the "botanical" code.

    Science.gov (United States)

    Oren, Aharon; Ventura, Stefano

    2017-10-01

    Cyanobacterial taxonomy developed in the botanical world because Cyanobacteria/Cyanophyta have traditionally been identified as algae. However, they possess a prokaryotic cell structure, and phylogenetically they belong to the Bacteria. This caused nomenclature problems as the provisions of the International Code of Nomenclature for algae, fungi, and plants (ICN; the "Botanical Code") differ from those of the International Code of Nomenclature of Prokaryotes (ICNP; the "Prokaryotic Code"). While the ICN recognises names validly published under the ICNP, Article 45(1) of the ICN has not yet been reciprocated in the ICNP. Different solutions have been proposed to solve the current problems. In 2012 a Special Committee on the harmonisation of the nomenclature of Cyanobacteria was appointed, but its activity has been minimal. Two opposing proposals to regulate cyanobacterial nomenclature were recently submitted, one calling for deletion of the cyanobacteria from the groups of organisms whose nomenclature is regulated by the ICNP, the second to consistently apply the rules of the ICNP to all cyanobacteria. Following a general overview of the current status of cyanobacterial nomenclature under the two codes we present five case studies of genera for which nomenclatural aspects have been discussed in recent years: Microcystis, Planktothrix, Halothece, Gloeobacter and Nostoc.

  10. The status of simulation codes for extraction process using mixer-settler

    Energy Technology Data Exchange (ETDEWEB)

    Byeon, Kee Hoh; Lee, Eil Hee; Kwon, Seong Gil; Kim, Kwang Wook; Yang, Han Beom; Chung, Dong Yong; Lim, Jae Kwan; Shin, Hyun Kyoo; Kim, Soo Ho

    1999-10-01

    We have studied and analyzed mixer-settler simulation codes: three versions of the SEPHIS series, PUBG, and EXTRA.M, the most recently developed code. All are satisfactory codes in the field of process/device modeling, but accurate distribution data and chemical reaction mechanisms must be formulated to ensure accuracy and reliability. For application to the group separation process, the mixer-settler models of these codes present no problems, but the accumulation and formulation of partitioning and reaction-equilibrium data for the chemical elements used in the group separation process is very important. (author)

  11. The "periodic table" of the genetic code: A new way to look at the code and the decoding process.

    Science.gov (United States)

    Komar, Anton A

    2016-01-01

    Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.

  12. UNR. A code for processing unresolved resonance data for MCNP

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-09-01

    In neutron transport problems the correct treatment of self-shielding is important for those nuclei present in large concentrations. Monte Carlo calculations using continuous-energy cross section data, such as calculations with the code MCNP, offer the advantage that neutron transport is calculated very accurately. Self-shielding in the resolved resonance region is taken into account exactly in MCNP. However, self-shielding in the unresolved resonance region cannot be taken into account by MCNP, although its effect may be important in many applications. In this report a description is given of the computer code UNR. With this code, problem-dependent cross section libraries can be produced for MCNP. These libraries contain self-shielded cross section data in the unresolved resonance range, produced by the NJOY module UNRESR. It is noted that the treatment of resonance self-shielding presented in this report is approximate. However, the current version of MCNP does not allow the use of probability tables, which would be a general solution. (orig.)

  13. 77 FR 17460 - Multistakeholder Process To Develop Consumer Data Privacy Codes of Conduct

    Science.gov (United States)

    2012-03-26

    ..., 2012, NTIA requested public comments on (1) which consumer data privacy issues should be the focus of.... 120214135-2203-02] RIN 0660-XA27 Multistakeholder Process To Develop Consumer Data Privacy Codes of Conduct... request for public comments on the multistakeholder process to develop consumer data privacy codes of...

  14. KUPOL-M code for simulation of the VVER's accident localization system under LOCA conditions

    International Nuclear Information System (INIS)

    Efanov, A.D.; Lukyanov, A.A.; Shangin, N.N.; Zajtsev, A.A.; Solov'ev, S.L.

    2004-01-01

    The computer code KUPOL-M is developed for the analysis of the thermodynamic parameters of the medium within the full-pressure containment of NPPs with VVER under LOCA conditions. The analysis takes into account the effects of non-stationary heat and mass transfer of the gas-droplet mixture in the containment compartments with natural convection, volume and surface steam condensation in the presence of noncondensables, and heat and mass exchange of the compartment atmosphere with water in the sumps. The operation of the main safety systems, such as the spray system, hydrogen catalytic recombiners, emergency core cooling pumps, valves and the fan system, is simulated in the KUPOL-M code. The main results of the code verification, including those from participation in the ISP-47 International Standard Problem on containment thermal-hydraulics, are presented. (author)

  15. RODSWELL: a computer code for the thermomechanical analysis of fuel rods under LOCA conditions

    International Nuclear Information System (INIS)

    Casadei, F.; Laval, H.; Donea, J.; Jones, P.M.; Colombo, A.

    1984-01-01

    The present report is the user's manual for the computer code RODSWELL, developed at the JRC-Ispra for the thermomechanical analysis of LWR fuel rods under simulated loss-of-coolant accident (LOCA) conditions. The code calculates the variation in space and time of all significant fuel rod variables, including fuel, gap and cladding temperature, fuel and cladding deformation, cladding oxidation and rod internal pressure. The essential characteristics of the code are briefly outlined here. The model is specifically designed to perform a full thermal and mechanical analysis in both the azimuthal and radial directions. Thus, azimuthal temperature gradients arising from pellet eccentricity, flux tilt, arbitrary distributions of heat sources in the fuel and the cladding, and azimuthal variation of coolant conditions can be treated. The code combines a transient 2-dimensional heat conduction code and a 1-dimensional mechanical model for the cladding deformation. The fuel rod is divided into a number of axial sections and a detailed thermomechanical analysis is performed within each section in the radial and azimuthal directions. In the following sections, instructions are given for the definition of the data files and the semi-variable dimensions. Then follows a complete description of the input data. Finally, the restart option is described

  16. Role of Symbolic Coding and Rehearsal Processes in Observational Learning

    Science.gov (United States)

    Bandura, Albert; Jeffery, Robert W.

    1973-01-01

    Results were interpreted as supporting a social learning view of observational learning that emphasizes central processing of response information in the acquisition phase, and motor reproduction and incentive processes in the overt enactment of what has been learned. (Author)

  17. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    Science.gov (United States)

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. Coding validity of conditions is closely related to their clinical importance and the complexity of the patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
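
    As a minimal sketch of the two validity measures reported, with chart review as the reference standard (the boolean-list interface is invented for illustration):

    ```python
    def coding_validity(coded, chart):
        """Sensitivity and positive predictive value of a coded condition,
        taking chart review as the reference standard. `coded` and `chart`
        are parallel lists of booleans, one pair per hospital record."""
        tp = sum(c and r for c, r in zip(coded, chart))     # true positives
        fp = sum(c and not r for c, r in zip(coded, chart)) # false positives
        fn = sum(r and not c for c, r in zip(coded, chart)) # false negatives
        sensitivity = tp / (tp + fn) if tp + fn else float("nan")
        ppv = tp / (tp + fp) if tp + fp else float("nan")
        return sensitivity, ppv

    # toy usage: one TP, one FP, one FN, one TN -> (0.5, 0.5)
    print(coding_validity([True, True, False, False], [True, False, True, False]))
    ```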

  18. Asymmetric Spatial Processing Under Cognitive Load

    Directory of Open Access Journals (Sweden)

    Lien Naert

    2018-04-01

    Full Text Available Spatial attention allows us to selectively process information within a certain location in space. Despite the vast literature on spatial attention, the effect of cognitive load on spatial processing is still not fully understood. In this study we added cognitive load to a spatial processing task, so as to see whether it would differentially impact upon the processing of visual information in the left versus the right hemispace. The main paradigm consisted of a detection task that was performed during the maintenance interval of a verbal working memory task. We found that increasing cognitive working memory load had a more negative impact on detecting targets presented on the left side compared to those on the right side. The strength of the load effect correlated with the strength of the interaction on an individual level. The implications of an asymmetric attentional bias with a relative disadvantage for the left (vs the right) hemispace under high verbal working memory (WM) load are discussed.

  19. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  20. A proposal for further integration of the cyanobacteria under the Bacteriological Code.

    Science.gov (United States)

    Oren, Aharon

    2004-09-01

    This taxonomic note reviews the present status of the nomenclature of the cyanobacteria under the Bacteriological Code. No more than 13 names of cyanobacterial species have been proposed so far in the International Journal of Systematic and Evolutionary Microbiology (IJSEM)/International Journal of Systematic Bacteriology (IJSB), and of these only five are validly published. The cyanobacteria (Cyanophyta, blue-green algae) are also named under the Botanical Code, and the dual nomenclature system causes considerable confusion. This note calls for a more intense involvement of the International Committee on Systematics of Prokaryotes (ICSP), its Judicial Commission and its Subcommittee on the Taxonomy of Photosynthetic Prokaryotes in the nomenclature of the cyanobacteria under the Bacteriological Code. The establishment of minimal standards for the description of new species and genera should be encouraged in a way that will be acceptable to the botanical authorities as well. This should be followed by the publication of an 'Approved List of Names of Cyanobacteria' in IJSEM. The ultimate goal is to achieve a consensus nomenclature that is acceptable both to bacteriologists and to botanists, anticipating the future implementation of a universal 'Biocode' that would regulate the nomenclature of all organisms living on Earth.

  1. An approach to business process recovery from source code

    NARCIS (Netherlands)

    Pacini, Luiz A.; do Prado, Antonio F.; Lopes de Souza, Wanderley; Ferreira Pires, Luis; Latifi, S.

    2015-01-01

    Over time, business processes have become an asset for organizations, since they allow managing what happens within their environments. It is possible to automate some activities of a business process using information systems and accordingly decrease the execution time and increase production. How-

  2. Development of the Log-in Process and the Operation Process for the VHTR-SI Process Dynamic Simulation Code

    International Nuclear Information System (INIS)

    Chang, Jiwoon; Shin, Youngjoon; Kim, Jihwan; Lee, Kiyoung; Lee, Wonjae; Chang, Jonghwa; Youn, Cheung

    2009-01-01

    The VHTR-SI process is a hydrogen production technique using sulfur and iodine. The SI process uses high-temperature (about 950 °C) He gas, the reactor coolant, as its energy source. The Korea Atomic Energy Research Institute Dynamic Simulation Code (KAERI DySCo) is integrated application software that simulates the dynamic behavior of the VHTR-SI process. Dynamic modeling is used to express and model the behavior of a software system over time. It deals with the control flow of the system, the interaction of objects and the ordering of actions in terms of time and transitions, by means of sequence diagrams and state-transition diagrams. In this paper, we present the user log-in process and the operation process for the KAERI DySCo by using a sequence diagram and a state transition diagram

  3. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    Science.gov (United States)

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  4. A political perspective on business elites and institutional embeddedness in the UK code-issuing process

    NARCIS (Netherlands)

    Haxhi, I.; van Ees, H.; Sorge, A.

    2013-01-01

    Manuscript Type: Perspective. Research Question/Issue: What is the role of institutional actors and business elites in the development of UK corporate governance codes? In the current paper, we explore the UK code-issuing process by focusing on the UK actors, their power and interplay. Research

  5. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    Science.gov (United States)

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela

    2015-01-01

    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…

  6. A Political Perspective on Business Elites and Institutional Embeddedness in the UK Code-Issuing Process

    NARCIS (Netherlands)

    Haxhi, Ilir; van Ees, Hans; Sorge, Arndt

    2013-01-01

    Manuscript Type: Perspective. Research Question/Issue: What is the role of institutional actors and business elites in the development of UK corporate governance codes? In the current paper, we explore the UK code-issuing process by focusing on the UK actors, their power and interplay. Research

  8. Code it rite the first time : automated invoice processing solution designed to ensure validity to field ticket coding

    Energy Technology Data Exchange (ETDEWEB)

    Chandler, G.

    2010-03-15

    An entrepreneur who ran 55 rigs for a major oilfield operator in Calgary has developed a solution for the oil industry that reduces field ticketing errors from 40 per cent to almost none. Code-Rite not only simplifies field ticketing but can eliminate weeks of trying to balance authorization for expenditure (AFE) numbers. A service provider who wants a field ticket signed for billing purposes following a service call to a well site receives all pertinent information on a barcode that includes the AFE number, location, routing, approval authority and mailing address. Attaching the label to the field ticket provides all the invoicing information needed. This article describes the job profile, education, and life experiences and opportunities that led the innovator to develop this technology, which solves an industry-wide problem. Code-Rite is currently being used by 3 large upstream oil and gas operators and plans are underway to automate the entire invoice processing system. 1 fig.

  9. A Critical Appraisal of the Juvenile Justice System under Cameroon's 2005 Criminal Procedure Code: Emerging Challenges

    Directory of Open Access Journals (Sweden)

    S Tabe

    2012-03-01

    Full Text Available The objective of this article is to examine the changes introduced by the 2005 Cameroonian Criminal Procedure Code on matters of juvenile justice, considering that before this Code, juvenile justice in Cameroon was governed by extra-national laws. In undertaking this analysis, the article highlights the evolution of the administration of juvenile justice 50 years after the independence of Cameroon. It also points out the various difficulties and shortcomings in the treatment of juvenile offenders in Cameroon since the enactment of the new Criminal Procedure Code. The article reveals that the 2005 Code is an amalgamation of all hitherto existing laws in the country that pertained to juvenile justice, and that despite the considerable amount of criticism it has received, the Code is clearly an improvement of the system of juvenile justice in Cameroon, since it represents a balance of the due process rights of young people, the protection of society and the special needs of young offenders. This is so because the drafters of the Code took a broad view of the old laws on juvenile justice. A wide range of groups was also consulted, including criminal justice professionals, children's service organisations, victims, parents, young offenders, educators, advocacy groups and social-policy analysts. However, to address the challenges that beset the juvenile justice system of Cameroon, the strategy of the government should be focussed on three areas: the prevention of youth crime, the provision of meaningful consequences for the actions of young people, and the rehabilitation and reintegration of young offenders. Cameroonian law should seek educative solutions rather than impose prison sentences or other repressive measures on young offenders. Special courts to deal with young offenders should be established outside the regular penal system and should be provided with resources that are adequate for and appropriate to fostering their understanding of

  10. Fuel corrosion processes under waste disposal conditions

    International Nuclear Information System (INIS)

    Shoesmith, D.W.

    1999-09-01

    Under the oxidizing conditions likely to be encountered in the Yucca Mountain Repository, fuel dissolution is a corrosion process involving the coupling of the anodic dissolution of the fuel with the cathodic reduction of oxidants available within the repository. The oxidants potentially available to drive fuel corrosion are environmental oxygen, supplied by transport through the permeable rock of the mountain, and molecular and radical species produced by the radiolysis of available aerated water. The mechanism of these coupled anodic and cathodic reactions is reviewed in detail. While gaps in understanding remain, many kinetic features of these reactions have been studied in considerable detail, and a reasonably justified mechanism for fuel corrosion is available. The corrosion rate is determined primarily by environmental factors rather than the properties of the fuel. Thus, with the exception of an increase in rate due to an increase in surface area, pre-oxidation of the fuel has little effect on the corrosion rate

  12. Code Development on Fission Product Behavior under Severe Accident-Validation of Aerosol Sedimentation

    International Nuclear Information System (INIS)

    Ha, Kwang Soon; Kim, Sung Il; Jang, Jin Sung; Kim, Dong Ha

    2016-01-01

    The gas and aerosol phases of the radioactive materials move through the reactor coolant systems and containments as loaded on the carrier gas or liquid, such as steam or water. Most radioactive materials might escape in the form of aerosols from a nuclear power plant during a severe reactor accident, and it is very important to predict the behavior of these radioactive aerosols in the reactor cooling system and in the containment building under severe accident conditions. Aerosols are very small solid particles or liquid droplets suspended in a gas phase. The suspended solid or liquid particles typically have a range of sizes of 0.01 μm to 20 μm. Aerosol concentrations in reactor accident analyses are typically less than 100 g/m3 and usually less than 1 g/m3. When there are continuing sources of aerosol to the gas phase, or when there are complicated processes involving engineered safety features, much more complicated size distributions develop. It is not uncommon for aerosols in reactor containments to have bimodal size distributions for at least some significant periods of time early during an accident. Salient features of aerosol physics under reactor accident conditions that will affect the nature of the aerosols are (1) the formation of aerosol particles, (2) the growth of aerosol particles, and (3) the shape of aerosol particles. At KAERI, a fission product module has been developed to predict the behavior of radioactive materials in the reactor coolant system under severe accident conditions. The fission product module consists of an estimation of the initial inventories, species release from the core, aerosol generation, gas transport, and aerosol transport. The final outcomes of the fission product module give the radioactive gas and aerosol distribution in the reactor coolant system. The aerosol sedimentation models in the fission product module were validated using the ABCOVE and LACE experiments. There were some discrepancies on the predicted

  14. Cognitive Processes Underlying the Artistic Experience

    Directory of Open Access Journals (Sweden)

    Alejandra Wah

    2017-08-01

    Full Text Available Based on the field of aesthetics, for centuries philosophers and more recently scientists have been concerned with understanding the artistic experience, focusing on emotional responses to the perception of artworks. By contrast, in the last decades, evolutionary biology has been concerned with explaining the artistic experience by focusing on the cognitive processes underlying this experience. Up until now, the cognitive mechanisms that allow humans to experience objects and events as art remain largely unexplored, and there is still no conventional use of terms for referring to the processes which may explain why the artistic experience is characteristically human and universal to human beings (Dissanayake, 1992, p. 24; Donald, 2006, p. 4). In this paper, I will first question whether it is productive to understand the artistic experience in terms of perception and emotion, and I will subsequently propose a possible alternative explanation to understand this experience. Drawing upon the work of Ellen Dissanayake (1992, 2000, 2015), Merlin Donald (2001, 2006, 2013), Antonio Damasio (1994, 2000, 2003, 2010), Barend van Heusden (2004, 2009, 2010), and Alejandra Wah (2014), I will argue that this experience is characterized by particular degrees of imagination and consciousness.

  15. Parallel processing approach to transform-based image coding

    Science.gov (United States)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to the prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.
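
    The block decomposition that makes transform coding parallel-friendly can be sketched as follows (a generic illustration in Python, not the DSP system described; an 8x8 DCT is assumed as the transform):

    ```python
    import numpy as np
    from multiprocessing import Pool
    from scipy.fft import dctn   # type-II DCT, typical of transform image coding

    BLOCK = 8

    def transform_block(block):
        """Forward 8x8 DCT of one image block -- the independent work unit
        that can be handed to any free processor."""
        return dctn(block, norm="ortho")

    def parallel_transform(image, workers=4):
        h, w = image.shape
        blocks = [image[r:r + BLOCK, c:c + BLOCK]
                  for r in range(0, h, BLOCK) for c in range(0, w, BLOCK)]
        with Pool(workers) as pool:
            # Pool.map hands blocks to idle workers -- a crude form of the
            # automatic load balancing discussed in the paper
            return pool.map(transform_block, blocks)

    if __name__ == "__main__":
        coeffs = parallel_transform(np.random.rand(64, 64))
    ```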

  16. The Kepler Science Data Processing Pipeline Source Code Road Map

    Science.gov (United States)

    Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima

    2016-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.

  17. Development of the multistep compound process calculation code

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan)

    1998-03-01

    A program `cmc` has been developed to calculate the multistep compound (MSC) process of Feshbach-Kerman-Koonin. A radial overlap integral in the transition matrix element is calculated microscopically, and comparisons are made for neutron-induced 93Nb reactions. Strengths of the two-body interaction V0 are estimated from the total MSC cross sections. (author)

  18. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems by making use of the independence of the calculation for each particle. The applicability of existing MC codes to parallel processing is discussed. As for parallel computers, we used both vector-parallel and scalar-parallel processors in the performance evaluation. We carried out (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results of parallel processing on these two types of parallel or distributed-memory computers. In addition, we discuss the evaluation of parallel programming environments for the parallel computers used in the present work, as part of the work on developing the STA (Seamless Thinking Aid) Basic Software. (author)
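
    The particle-level independence exploited here is easy to illustrate (a toy 1-D slab problem in Python with process-level parallelism, not MCNP itself; the cross section, slab width and 50/50 collision split are arbitrary):

    ```python
    import numpy as np
    from multiprocessing import Pool

    def track_history(seed, sigma_t=1.0, slab=5.0):
        """Follow one particle through a 1-D slab: exponential free flights,
        then 50/50 absorption or isotropic scattering. Histories are fully
        independent, which is what makes MC transport embarrassingly parallel."""
        rng = np.random.default_rng(seed)
        x, mu = 0.0, 1.0
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # free flight
            if x < 0.0 or x > slab:
                return x > slab                        # transmitted through the slab?
            if rng.random() < 0.5:
                return False                           # absorbed
            mu = rng.uniform(-1.0, 1.0)                # isotropic scatter

    if __name__ == "__main__":
        with Pool(4) as pool:                          # histories spread over processes
            leaks = pool.map(track_history, range(100_000))
        print("transmission probability =", sum(leaks) / len(leaks))
    ```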

  19. The enhanced variance propagation code for the Idaho Chemical Processing Plant

    International Nuclear Information System (INIS)

    Kern, E.A.; Zack, N.R.; Britschgi, J.J.

    1992-01-01

    The Variance Propagation (VP) Code was developed by the Los Alamos National Laboratory's Safeguards Systems Group to provide off-line variance propagation and systems analysis for nuclear material processing facilities. The code can also be used as a tool in the design and evaluation of material accounting systems. In this regard, the VP code was enhanced to incorporate a model of the material accountability measurements used in the Idaho Chemical Processing Plant operated by the Westinghouse Idaho Nuclear Company. Inputs to the code were structured to account for the dissolver/headend process and the waste streams, and analyses were performed to determine the sensitivity of measurement and sampling errors to the overall material balance error. We determined that the material balance error is very sensitive to changes in the sampling errors. 3 refs
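
    For the simplest case of independent measurement errors, the variance propagation underlying such a code reduces to a quadrature sum over the material balance terms (an illustrative sketch; the real VP code also treats correlated and systematic error terms):

    ```python
    import math

    def material_balance_sigma(variances):
        """Standard deviation of a material balance MB = sum(+/- m_i) when the
        component measurement errors are independent: the signs drop out and
        the variances simply add."""
        return math.sqrt(sum(variances))

    # toy example: input, output and inventory-change measurement variances (kg^2)
    sigma_mb = material_balance_sigma([0.10**2, 0.12**2, 0.05**2])
    print(f"material balance std dev = {sigma_mb:.3f} kg")
    ```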

  20. Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

    CERN Multimedia

    2005-01-01

    Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

  1. Modification of fuel performance code to evaluate iron-based alloy behavior under LOCA scenario

    Energy Technology Data Exchange (ETDEWEB)

    Giovedi, Claudia; Martins, Marcelo Ramos, E-mail: claudia.giovedi@labrisco.usp.br, E-mail: mrmartin@usp.br [Laboratorio de Analise, Avaliacao e Gerenciamento de Risco (LabRisco/POLI/USP), São Paulo, SP (Brazil); Abe, Alfredo; Muniz, Rafael O.R.; Gomes, Daniel de Souza; Silva, Antonio Teixeira e, E-mail: ayabe@ipen.br, E-mail: dsgomes@ipen.br, E-mail: teixiera@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    Accident tolerant fuels (ATF) have been studied since the Fukushima Daiichi accident in research efforts to develop new materials which, under accident scenarios, could maintain fuel rod integrity for a longer period compared to the cladding and fuel system usually utilized in Pressurized Water Reactors (PWR). The efforts have been focused on new materials applied as cladding, and iron-based alloys appear as a possible candidate. The aim of this paper is to implement modifications in a fuel performance code to evaluate the behavior of iron-based alloys under a Loss-of-Coolant Accident (LOCA) scenario. Initially, the properties related to the thermal and mechanical behavior of iron-based alloys were obtained from the literature, appropriately adapted, and introduced into the fuel performance code subroutines. The adopted approach was step-by-step modification, with different versions of the code created at each step. The assessment of the implemented modifications was carried out by simulating an experiment available in the open literature (IFA-650.5) on zirconium-based alloy fuel rods submitted to LOCA conditions. The results obtained for the iron-based alloy were compared to those obtained using the regular version of the fuel performance code for Zircaloy-4. They show that the most important properties to be changed are those in the subroutines related to the mechanical properties of the cladding, and that burst is observed at a later time for fuel rods with iron-based alloy cladding, indicating the potential of this material for use as cladding for ATF purposes. (author)

  3. Industrial process heat case studies. [PROSYS/ECONMAT code

    Energy Technology Data Exchange (ETDEWEB)

    Hooker, D.W.; May, E.K.; West, R.E.

    1980-05-01

    Commercially available solar collectors have the potential to provide a large fraction of the energy consumed for industrial process heat (IPH). Detailed case studies of individual industrial plants are required in order to make an accurate assessment of the technical and economic feasibility of applications. This report documents the results of seven such case studies. The objectives of the case study program are to determine the near-term feasibility of solar IPH in selected industries, identify energy conservation measures, identify conditions of IPH systems that affect solar applications, test SERI's IPH analysis software (PROSYS/ECONOMAT), disseminate information to the industrial community, and provide inputs to the SERI research program. The detailed results from the case studies are presented. Although few near-term, economical solar applications were found, the conditions that would enhance the opportunities for solar IPH applications are identified.

  4. Process of cross section generation for radiation shielding calculations, using the NJOY code

    International Nuclear Information System (INIS)

    Ono, S.; Corcuera, R.P.

    1986-10-01

    The process of generating multigroup cross sections for radiation shielding calculations using the NJOY code is explained. Photon production cross sections, processed by the GROUPR module, and photon interaction cross sections, processed by the GAMINR module, are given. These data are compared with the data produced by the AMPX system and with published data. (author) [pt

  5. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal over a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth-matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
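
    A stripped-down version of the hybrid idea (a plain scalar quantizer instead of the paper's predictive/transform coder, and a fixed power split alpha instead of the jointly optimized allocation) can be sketched as:

    ```python
    import numpy as np

    def hda_encode(x, bits=2, p_total=1.0, alpha=0.5):
        """Toy hybrid digital-analog encoder: a uniform scalar quantizer
        (digital layer, power alpha * p_total) plus linear analog transmission
        of the quantization residual (power (1 - alpha) * p_total)."""
        levels = 2 ** bits
        step = (x.max() - x.min()) / levels
        q = np.clip(np.floor((x - x.min()) / step), 0, levels - 1)  # digital indices
        recon = x.min() + (q + 0.5) * step     # quantizer reconstruction points
        residual = x - recon                   # what the analog layer carries
        gain = np.sqrt((1.0 - alpha) * p_total / residual.var())  # analog power scaling
        return q.astype(int), gain * residual

    # toy usage on a correlated (AR(1)) Gaussian source
    rng = np.random.default_rng(0)
    x = np.zeros(1000)
    for n in range(1, x.size):
        x[n] = 0.9 * x[n - 1] + rng.normal()
    digital_indices, analog_signal = hda_encode(x)
    ```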

  6. Pre-processing of input files for the AZTRAN code; Pre procesamiento de archivos de entrada para el codigo AZTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Vargas E, S. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Ibarra, G., E-mail: samuel.vargas@inin.gob.mx [IPN, Av. Instituto Politecnico Nacional s/n, 07738 Ciudad de Mexico (Mexico)

    2017-09-15

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make its preparation easier. It relies on a new input file format with specific cards divided into two blocks, mandatory cards and optional cards; it pre-processes the input file to identify possible errors within it, and it includes an image generator for the specific problem based on the Python interpreter. (Author)
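
    The mandatory/optional card check such a pre-processor performs can be illustrated as follows (a Python sketch for brevity, although the record says the actual script is written in D; the card names are hypothetical placeholders, not AZTRAN's real card set):

    ```python
    # Hypothetical card names -- placeholders, not AZTRAN's actual card set.
    MANDATORY = {"GEOMETRY", "MATERIALS", "SOURCE"}
    OPTIONAL = {"PRINT", "PLOT", "CONVERGENCE"}

    def check_input_deck(path):
        """Pre-process an input deck: flag unknown cards and missing
        mandatory ones before the solver is ever run."""
        seen, errors = set(), []
        with open(path) as f:
            for n, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                      # skip blanks and comments
                card = line.split()[0].upper()
                if card not in MANDATORY | OPTIONAL:
                    errors.append(f"line {n}: unknown card '{card}'")
                seen.add(card)
        errors += [f"missing mandatory card '{c}'" for c in sorted(MANDATORY - seen)]
        return errors
    ```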

  7. Characterization of the MCNPX computer code in micro processed architectures

    International Nuclear Information System (INIS)

    Almeida, Helder C.; Dominguez, Dany S.; Orellana, Esbel T.V.; Milian, Felix M.

    2009-01-01

    The MCNPX (Monte Carlo N-Particle eXtended) code can be used to simulate the transport of several types of nuclear particles using probabilistic methods. The technique used by MCNPX is to follow the history of each particle from its origin to its extinction, whether by absorption, escape or other causes. Obtaining accurate results in simulations performed with MCNPX requires processing a large number of histories, which demands a high computational cost. Currently MCNPX can be installed on virtually all available computing platforms; however, there is virtually no information on the performance of the application on each. This paper studies the performance of MCNPX, working with electrons and photons in the Faux phantom, on the two platforms used by most researchers, Windows and Linux. Both platforms were tested on the same computer to ensure the reliability of the hardware in the performance measurements. The performance of MCNPX was measured by the time spent running a simulation, making time the main measure of comparison. During the tests the difference in MCNPX performance between the two platforms was evident. In some cases we obtained speed gains of more than 10% simply by changing platforms, without any specific optimization. This shows the relevance of studying how to optimize this tool on the platform most appropriate for its use. (author)

  8. Processes of code status transitions in hospitalized patients with advanced cancer.

    Science.gov (United States)

    El-Jawahri, Areej; Lau-Min, Kelsey; Nipp, Ryan D; Greer, Joseph A; Traeger, Lara N; Moran, Samantha M; D'Arpino, Sara M; Hochberg, Ephraim P; Jackson, Vicki A; Cashavelly, Barbara J; Martinson, Holly S; Ryan, David P; Temel, Jennifer S

    2017-12-15

    Although hospitalized patients with advanced cancer have a low chance of surviving cardiopulmonary resuscitation (CPR), the processes by which they change their code status from full code to do not resuscitate (DNR) are unknown. We conducted a mixed-methods study on a prospective cohort of hospitalized patients with advanced cancer. Two physicians used a consensus-driven medical record review to characterize the processes that led to code status order transitions from full code to DNR. In total, 1047 hospitalizations were reviewed among 728 patients. Admitting clinicians did not address code status in 53% of hospitalizations, resulting in code status orders of "presumed full." In total, 275 patients (26.3%) transitioned from full code to DNR, and 48.7% (134 of 275 patients) of those had an order of "presumed full" at admission; however, upon further clarification, the patients expressed that they had wished to be DNR before the hospitalization. We identified 3 additional processes leading to order transitions from full code to DNR: acute clinical deterioration (15.3%), discontinuation of cancer-directed therapy (17.1%), and education about the potential harms/futility of CPR (15.3%). Compared with discontinuation of therapy and education, transitions because of acute clinical deterioration were associated with less patient involvement (P = .002) and a shorter time to death. Many code status transitions in hospitalized patients with advanced cancer were because of full code orders in patients who had a preference for DNR before hospitalization. Transitions due to acute clinical deterioration were associated with less patient engagement and a higher likelihood of inpatient death. Cancer 2017;123:4895-902. © 2017 American Cancer Society.

  9. SHARAKU: an algorithm for aligning and clustering read mapping profiles of deep sequencing in non-coding RNA processing.

    Science.gov (United States)

    Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi

    2016-06-15

    Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After obtaining sequence reads, the short reads are mapped to a reference genome, and specific mapping patterns, called read mapping profiles, can be detected that are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into an alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited performance superior to that of previous methods in correctly clustering the read mapping profiles with respect to 5'-end and 3'-end processing from degradation patterns, and in detecting similar processing patterns that derive the shorter RNAs. Further, using experimental data of small RNA sequencing for the common marmoset brain, SHARAKU succeeded in identifying significant clusters of read mapping profiles for similar processing patterns of small derived RNA families expressed in the brain. The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: The sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work is available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA004502. yasu@bio.keio.ac.jp Supplementary data are available
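
    Although SHARAKU additionally incorporates primary and secondary structure, the core task of aligning two read-depth profiles can be illustrated with plain dynamic programming. The sketch below (Python) computes a dynamic-time-warping alignment cost between two toy coverage vectors; it is a generic illustration, not the SHARAKU algorithm.

        import numpy as np

        def dtw_distance(p, q):
            """Align two read-depth profiles (1D arrays) by dynamic time
            warping and return the cumulative alignment cost."""
            n, m = len(p), len(q)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(p[i - 1] - q[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Two toy 5'-end processing profiles differing mainly by a shift
        a = np.array([0, 8, 9, 3, 1, 0, 0], float)
        b = np.array([0, 0, 7, 9, 4, 1, 0], float)
        print(dtw_distance(a, b))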

  10. Word attributes and lateralization revisited: implications for dual coding and discrete versus continuous processing.

    Science.gov (United States)

    Boles, D B

    1989-01-01

    Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.

  11. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multi-group Monte Carlo code for particle transport, MORSE, was modified for high performance computing on the Monte Carlo Machine Monte-4. The method and the results are described. Monte-4 was specially developed to realize high performance computing of Monte Carlo codes for particle transport, for which it has been difficult to obtain high performance in vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  12. Characteristics of a Dairy Process under Uncertainty

    DEFF Research Database (Denmark)

    Cheng, Hongyuan; Friis, Alan

    2007-01-01

    A set of physical property models for dairy products was developed and built into the PRO/II system. Milk product viscosity and process system pressure drop were employed as the process characteristic parameters to determine a process operation window. The flexibility of the operation window vertexes was evaluated with a minimization of the process pasteurization and cooling temperatures through a vertex enumeration method. The quantitative analysis of the dairy process established a framework for the development of different flexible units, such as integrated milk and milk-based product productions, multi…

  13. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  14. Energy meshing techniques for processing ENDF/B-VI cross sections using the AMPX code system

    International Nuclear Information System (INIS)

    Dunn, M.E.; Greene, N.M.; Leal, L.C.

    1999-01-01

    Modern techniques for the establishment of criticality safety for fissile systems invariably require the use of neutronic transport codes with applicable cross-section data. Accurate cross-section data are essential for solving the Boltzmann Transport Equation for fissile systems. In the absence of applicable critical experimental data, the use of independent calculational methods is crucial for the establishment of subcritical limits. Moreover, there are various independent modern transport codes available to the criticality safety analyst (e.g., KENO V.a, MCNP, and MONK). In contrast, there is currently only one complete software package that processes data from the Version 6 format of the Evaluated Nuclear Data File (ENDF) to a format usable by criticality safety codes. To facilitate independent cross-section processing, Oak Ridge National Laboratory (ORNL) is upgrading the AMPX code system to enable independent processing of Version 6 formats using state-of-the-art procedures. The AMPX code system has been in continuous use at ORNL since the early 1970s and is the premier processor for providing multigroup cross sections for criticality safety analysis codes. Within the AMPX system, the module POLIDENT is used to access the resonance parameters in File 2 of an ENDF/B library, generate point cross-section data, and combine the cross sections with File 3 point data. At the heart of any point cross-section processing code is the generation of a suitable energy mesh for representing the data. The purpose of this work is to facilitate the AMPX upgrade through the development of a new and innovative energy meshing technique for processing point cross-section data.
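
    The core of such a meshing technique can be sketched in a few lines: starting from a coarse interval, keep bisecting until linear interpolation between mesh points reproduces the pointwise cross section to within a stated tolerance. The sketch below (Python) illustrates the idea with a stand-in analytic resonance; it is not POLIDENT's actual algorithm, and the cross section is not real ENDF data.

        def refine(f, e_lo, e_hi, tol=0.001, mesh=None):
            """Recursively bisect [e_lo, e_hi] until linear interpolation of f
            agrees with f at the interval midpoint to a relative tolerance."""
            if mesh is None:
                mesh = [e_lo]
            e_mid = 0.5 * (e_lo + e_hi)
            lin = 0.5 * (f(e_lo) + f(e_hi))   # linear interpolant at midpoint
            if abs(lin - f(e_mid)) > tol * abs(f(e_mid)):
                refine(f, e_lo, e_mid, tol, mesh)
                refine(f, e_mid, e_hi, tol, mesh)
            else:
                mesh.append(e_hi)             # interval accepted as-is
            return mesh

        # Stand-in cross section with a single resonance (not real ENDF data)
        sigma = lambda e: 1.0 + 100.0 / ((e - 6.7) ** 2 + 0.01)
        mesh = refine(sigma, 1.0, 20.0)
        print(len(mesh), "energy points")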

  15. The underlying event in hard scattering processes

    International Nuclear Information System (INIS)

    Field, R.

    2002-01-01

    The authors study the behavior of the underlying event in hard scattering proton-antiproton collisions at 1.8 TeV and compare it with QCD Monte-Carlo models. The underlying event is everything except the two outgoing hard scattered jets and receives contributions from the beam-beam remnants plus initial- and final-state radiation. The data indicate that neither ISAJET nor HERWIG produces enough charged particles (with pT > 0.5 GeV/c) from the beam-beam remnant component and that ISAJET produces too many charged particles from initial-state radiation. PYTHIA, which uses multiple parton scattering to enhance the underlying event, does the best job of describing the data.

  16. Development of system analysis code for pyrochemical process using molten salt electrorefining

    International Nuclear Information System (INIS)

    Tozawa, K.; Matsumoto, T.; Kakehi, I.

    2000-04-01

    This report describes the development of a cathode processor calculation code that simulates the mass and heat transfer phenomena of the distillation process, and of an analytical model for the cooling behavior of the pyrochemical process cell, both running on personal computers. The pyrochemical process using molten salt electrorefining would introduce new technologies for new fuels: particle oxide, particle nitride and metallic fuels. The cathode processor calculation code with the distillation process was developed, and a code validation calculation was conducted on the basis of the benchmark problem for natural convection in a square cavity. Results obtained with the present code agreed well with the published benchmark solution for the velocity-temperature fields and for the maximum velocity and its location. Functions have been added to improve the realism of the simulation and the efficiency of its use. A test run of the modified code has been conducted for an axisymmetric enclosed vessel simulating a cathode processor, and the capability of the code to simulate the distillation process has been confirmed. An analytical model for the cooling behavior of the pyrochemical process cell was developed; the model was selected by comparing benchmark analyses with detailed analyses on an engineering workstation. Flow and temperature distributions were confirmed by the results of a steady-state analysis. In the transient cooling analysis, an initial transient temperature peak occurred under the heat-balanced condition of the steady-state analysis, and the final gas temperature distribution depended on the gas circulation flow during the transient. Different final gas temperature distributions could therefore arise from the same steady-state result; that is, the system has the potential for metastable conditions. Therefore it was necessary to design the gas cooling flow pattern without cooling gas circulation.

  17. PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices

    International Nuclear Information System (INIS)

    Dunn, M.E.

    2000-01-01

    PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI
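
    The final quality checks described above are simple to illustrate. The sketch below (Python, with a toy 3-group covariance rather than real ENDF data) converts a covariance matrix to a correlation matrix, computes its eigenvalues, and tests for positive semidefiniteness; it mirrors the kind of check PUFF-III performs but is not PUFF-III code.

        import numpy as np

        def correlation_and_check(cov, eps=1e-12):
            """Convert a multigroup covariance matrix to a correlation matrix,
            then report its eigenvalues and positive (semi)definiteness."""
            sd = np.sqrt(np.diag(cov))
            corr = cov / np.outer(sd, sd)
            eig = np.linalg.eigvalsh(corr)   # symmetric matrix: use eigvalsh
            return corr, eig, bool(np.all(eig > -eps))

        # Toy 3-group covariance matrix (not real ENDF data)
        cov = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.16]])
        corr, eig, ok = correlation_and_check(cov)
        print(eig, "positive semidefinite:", ok)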

  18. Simulation codes of chemical separation process of spent fuel reprocessing. Tool for process development and safety research

    International Nuclear Information System (INIS)

    Asakura, Toshihide; Sato, Makoto; Matsumura, Masakazu; Morita, Yasuji

    2005-01-01

    This paper reviews the subsequent development and utilization of the Extraction System Simulation Code for Advanced Reprocessing (ESSCAR). From the viewpoint of development, more tests with spent fuel and corresponding calculations should be performed to gain a better understanding of the physico-chemical phenomena in a separation process. From the viewpoint of process safety research on fuel cycle facilities, it is important to know the process behavior of a key substance that is highly reactive but present in only trace amounts. (author)

  19. Working under the PJVA gas processing agreement

    International Nuclear Information System (INIS)

    Collins, S.

    1996-01-01

    The trend in the natural gas industry is towards custom processing. New gas reserves tend to be smaller and in tighter reservoirs than in the past. This has resulted in plants having processing and transportation capacity available to be leased to third parties. Major plant operators and owners are finding themselves in the business of custom processing in a more focused way. Operators recognize that the dilution of operating costs can result in significant benefits to the plant owners as well as the third party processor. The relationship between the gas processor and the gas producer under the Petroleum Joint Venture Association (PJVA) Gas Processing Agreement was discussed. Details of the standard agreement, which clearly defines the responsibilities of the third party producer and the processor, were explained. In addition to outlining the obligations of the parties, the agreement also provides a framework for fee negotiation. It was concluded that third party processing can lower facility operating costs, extend facility life, and keep Canadian gas competitive in North American gas markets.

  20. When Content Matters: The Role of Processing Code in Tactile Display Design.

    Science.gov (United States)

    Ferris, Thomas K; Sarter, Nadine

    2010-01-01

    The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.

  1. The GC computer code for flow sheet simulation of pyrochemical processing of spent nuclear fuels

    International Nuclear Information System (INIS)

    Ahluwalia, R.K.; Geyer, H.K.

    1996-01-01

    The GC computer code has been developed for flow sheet simulation of pyrochemical processing of spent nuclear fuel. It utilizes a robust algorithm SLG for analyzing simultaneous chemical reactions between species distributed across many phases. Models have been developed for analysis of the oxide fuel reduction process, salt recovery by electrochemical decomposition of lithium oxide, uranium separation from the reduced fuel by electrorefining, and extraction of fission products into liquid cadmium. The versatility of GC is demonstrated by applying the code to a flow sheet of current interest

  2. A multi-level code for metallurgical effects in metal-forming processes

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P.A.; Silling, S.A. [Sandia National Labs., Albuquerque, NM (United States). Computational Physics and Mechanics Dept.; Hughes, D.A.; Bammann, D.J.; Chiesa, M.L. [Sandia National Labs., Livermore, CA (United States)

    1997-08-01

    The authors present the final report on a Laboratory-Directed Research and Development (LDRD) project, A Multi-level Code for Metallurgical Effects in Metal-Forming Processes, performed during the fiscal years 1995 and 1996. The project focused on the development of new modeling capabilities for simulating forging and extrusion processes that typically display phenomenology occurring on two different length scales. In support of model fitting and code validation, ring compression and extrusion experiments were performed on 304L stainless steel, a material of interest in DOE nuclear weapons applications.

  3. Obsolescence – understanding the underlying processes

    NARCIS (Netherlands)

    Thomsen, A.F.

    2017-01-01

    Obsolescence, defined as the process of declining performance of buildings, is a serious threat for the value, the usefulness and the life span of built properties. Thomsen and van der Flier (2011) developed a model in which obsolescence is categorised on the basis of two distinctions, i.e. between

  4. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
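
    As a concrete taste of the linear codes such a text introduces, the sketch below implements the standard (7,4) Hamming code, which encodes 4 data bits into 7 bits and corrects any single-bit error. It is a textbook construction offered purely as an illustration.

        import numpy as np

        # Generator and parity-check matrices of the (7,4) Hamming code
        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])

        def encode(msg):                 # 4 data bits -> 7-bit codeword
            return (np.array(msg) @ G) % 2

        def decode(word):                # correct up to one flipped bit
            syndrome = (H @ word) % 2
            if syndrome.any():           # nonzero syndrome locates the error
                col = [tuple(H[:, j]) for j in range(7)].index(tuple(syndrome))
                word = word.copy()
                word[col] ^= 1
            return word[:4]              # data bits occupy the first 4 slots

        cw = encode([1, 0, 1, 1])
        cw[5] ^= 1                       # inject a single-bit error
        print(decode(cw))                # -> [1 0 1 1]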

  5. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.

  6. Development of a new flux map processing code for moveable detector system in PWR

    Energy Technology Data Exchange (ETDEWEB)

    Li, W.; Lu, H.; Li, J.; Dang, Z.; Zhang, X. [China Nuclear Power Technology Research Institute, 47 F/A Jiangsu Bldg., Yitian Road, Futian District, Shenzhen 518026 (China); Wu, Y.; Fan, X. [Information Technology Center, China Guangdong Nuclear Power Group, Shenzhen 518000 (China)

    2013-07-01

    This paper presents an introduction to the development of the flux map processing code MAPLE developed by the China Nuclear Power Technology Research Institute (CNPPJ), China Guangdong Nuclear Power Group (CGN). The method used to obtain the three-dimensional 'measured' power distribution from the measurement signals is also described. Three methods, namely the Weight Coefficient Method (WCM), the Polynomial Expand Method (PEM) and the Thin Plate Spline (TPS) method, have been applied to fit the deviation between measured and predicted results on the two-dimensional radial plane. The measured flux map data of the LINGAO nuclear power plant (NPP) are processed using MAPLE as a test case, combined with the 3D neutronics code COCO, to compare the effectiveness of the three methods. Assembly power distribution results show that the MAPLE results are reasonable and satisfactory. More verification and validation of the MAPLE code will be carried out in the future. (authors)
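
    Of the three fitting methods, the thin plate spline is the least self-explanatory. The sketch below (Python with SciPy, toy numbers rather than plant data) shows the generic idea: fit a smooth surface to measured-minus-predicted deviations at instrumented core locations, then evaluate it at uninstrumented ones. It illustrates the method only and is not the MAPLE implementation.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)

        # (x, y) positions of instrumented assemblies and the measured-minus-
        # predicted power deviation at each (toy numbers, not plant data)
        xy_meas = rng.uniform(-1, 1, size=(40, 2))
        dev = 0.02 * np.sin(2 * xy_meas[:, 0]) * np.cos(xy_meas[:, 1])

        # Thin plate spline fit of the deviation surface
        tps = RBFInterpolator(xy_meas, dev, kernel="thin_plate_spline")

        # Evaluate at uninstrumented assembly positions
        xy_all = rng.uniform(-1, 1, size=(5, 2))
        print(tps(xy_all))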

  7. Behavioral processes underlying the decline of narcissists' popularity over time.

    Science.gov (United States)

    Leckelt, Marius; Küfner, Albrecht C P; Nestler, Steffen; Back, Mitja D

    2015-11-01

    Following a dual-pathway approach to the social consequences of grandiose narcissism, we investigated the behavioral processes underlying (a) the decline of narcissists' popularity in social groups over time and (b) how this is differentially influenced by the 2 narcissism facets admiration and rivalry. In a longitudinal laboratory study, participants (N = 311) first provided narcissism self-reports using the Narcissistic Personality Inventory and the Narcissistic Admiration and Rivalry Questionnaire, and subsequently interacted with each other in small groups in weekly sessions over the course of 3 weeks. All sessions were videotaped and trained raters coded participants' behavior during the interactions. Within the sessions participants provided mutual ratings on assertiveness, untrustworthiness, and likability. Results showed that (a) over time narcissists become less popular and (b) this is reflected in an initially positive but decreasing effect of narcissistic admiration as well as an increasing negative effect of narcissistic rivalry. As hypothesized, these patterns of results could be explained by means of 2 diverging behavioral pathways: The negative narcissistic pathway (i.e., arrogant-aggressive behavior and being seen as untrustworthy) plays an increasing role and is triggered by narcissistic rivalry, whereas the relevance of the positive narcissistic pathway (i.e., dominant-expressive behavior and being seen as assertive) triggered by narcissistic admiration decreases over time. These findings underline the utility of a behavioral pathway approach for disentangling the complex effects of personality on social outcomes. (c) 2015 APA, all rights reserved.

  8. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    Science.gov (United States)

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  9. Motivational Processes Underlying Substance Abuse Disorder

    Science.gov (United States)

    King, Christopher P.; Ferrario, Carrie R.

    2016-01-01

    Drug addiction is a syndrome of dysregulated motivation, evidenced by intense drug craving and compulsive drug-seeking behavior. In the search for common neurobiological substrates of addiction to different classes of drugs, behavioral neuroscientists have attempted to determine the neural basis for a number of motivational concepts and describe how they are changed by repeated drug use. Here, we describe these concepts and summarize previous work describing three major neural systems that play distinct roles in different conceptual aspects of motivation: (1) a nigrostriatal system that is involved in two forms of instrumental learning, (2) a ventral striatal system that is involved in Pavlovian incentive motivation and negative reinforcement, and (3) frontal cortical areas that regulate decision making and motivational processes. Within striatal systems, drug addiction can involve a transition from goal-oriented, incentive processes to automatic, habit-based responding. In the cortex, weak inhibitory control is a predisposing factor to, as well as a consequence of, repeated drug intake. However, these transitions are not absolute, and addiction can occur without a transition to habit-based responding, occurring as a result of the overvaluation of drug outcomes and hypersensitivity to incentive properties of drug-associated cues. Finally, we point out that addiction is not monolithic and can depend not only on individual differences between addicts, but also on the neurochemical action of specific drug classes. PMID:26475159

  10. Making Visible the Coding Process: Using Qualitative Data Software in a Post-Structural Study

    Science.gov (United States)

    Ryan, Mary

    2009-01-01

    Qualitative research methods require transparency to ensure the "trustworthiness" of the data analysis. The intricate processes of organising, coding and analysing the data are often rendered invisible in the presentation of the research findings, which requires a "leap of faith" for the reader. Computer assisted data analysis software can be used…

  11. Recent development for the ITS code system: Parallel processing and visualization

    International Nuclear Information System (INIS)

    Fan, W.C.; Turner, C.D.; Halbleib, J.A. Sr.; Kensek, R.P.

    1996-01-01

    A brief overview is given for two software developments related to the ITS code system. These developments provide parallel processing and visualization capabilities and thus allow users to perform ITS calculations more efficiently. Timing results and a graphical example are presented to demonstrate these capabilities

  12. The Therapy Process Observational Coding System for Child Psychotherapy Strategies Scale

    Science.gov (United States)

    McLeod, Bryce D.; Weisz, John R.

    2010-01-01

    Most everyday child and adolescent psychotherapy does not follow manuals that document the procedures. Consequently, usual clinical care has remained poorly understood and rarely studied. The Therapy Process Observational Coding System for Child Psychotherapy-Strategies scale (TPOCS-S) is an observational measure of youth psychotherapy procedures…

  13. Summary of ENDF/B Pre-Processing Codes June 1983

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1983-06-01

    This is the summary documentation for the 1983 version of the ENDF/B Pre-Processing Codes LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, COMPLOT, and CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc.

  14. Discrete processes modelling and geometry description in RTS and T code

    International Nuclear Information System (INIS)

    Degtyarev, I.I.; Liashenko, O.A.; Lokhovitskii, A.E.; Yazynin, I.A.; Belyakov-Bodin, V.I.; Blokhin, A.I.

    2001-01-01

    This paper describes the recent version of the RTS and T code system. RTS and T performs detailed simulations of the transport of many types of particles in complex 3D geometries in the energy range from fractions of an eV up to 20 TeV. A description of the main processes is given. (author)

  15. The modeling of fuel rod behaviour under RIA conditions in the code DYN3D

    International Nuclear Information System (INIS)

    Rohde, U.

    1998-01-01

    A description of the fuel rod behaviour and heat transfer model used in the code DYN3D for nuclear reactor core dynamic simulations is given. Besides the solution of the heat conduction equations in fuel and cladding, the model comprises a detailed description of heat transfer in the gas gap by conduction, radiation and fuel-cladding contact. The gas gap behaviour is modeled in a mechanistic way, taking into account transient changes of the gas gap parameters based on given conditions for the initial state. Thermal, elastic and plastic deformations of fuel and cladding are taken into account within a 1D approximation. Numerical studies concerning the fuel rod behaviour under RIA conditions in power reactors are reported. Fuel rod behaviour at the high pressures and flow rates of power reactors is different from the behaviour under the atmospheric pressure and stagnant flow conditions of the experiments. The mechanisms of fuel rod failure for fresh and burned fuel reported in the literature can be qualitatively reproduced by the DYN3D model. (author)
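
    The steady-state backbone of such a model reduces to a chain of radial thermal resistances: fuel pellet, gas gap, cladding, and coolant film. The sketch below (Python, with illustrative property values) computes a fuel centerline temperature from that chain under strong simplifications (constant conductivities, fixed gap conductance); it is a minimal illustration, not the DYN3D model.

        import math

        # Simplified steady-state fuel rod temperature chain (illustrative values)
        q_lin  = 20e3      # linear heat rate, W/m
        k_fuel = 3.0       # UO2 conductivity, W/(m K)
        h_gap  = 5e3       # gap conductance, W/(m^2 K)
        k_clad = 15.0      # cladding conductivity, W/(m K)
        h_cool = 3e4       # coolant heat transfer coefficient, W/(m^2 K)
        r_f, r_ci, r_co = 4.1e-3, 4.2e-3, 4.75e-3   # radii, m
        T_cool = 580.0     # bulk coolant temperature, K

        # Series thermal resistances per unit length (fuel, gap, clad, film)
        R_fuel = 1.0 / (4.0 * math.pi * k_fuel)
        R_gap  = 1.0 / (2.0 * math.pi * r_f * h_gap)
        R_clad = math.log(r_co / r_ci) / (2.0 * math.pi * k_clad)
        R_film = 1.0 / (2.0 * math.pi * r_co * h_cool)

        T_center = T_cool + q_lin * (R_fuel + R_gap + R_clad + R_film)
        print(f"fuel centerline temperature ~ {T_center:.0f} K")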

  16. Rod behaviour under base load, load follow and frequency control operation: CYRANO 2 code predictions versus experimental results

    International Nuclear Information System (INIS)

    Gautier, B.; Raybaud, A.

    1984-01-01

    The French PWR reactors are now routinely operated under load follow and frequency control. In order to demonstrate that these operating conditions do not increase the fuel failure rate, fuel rod behaviour calculations have been performed by E.D.F. with the CYRANO 2 code. In parallel with these theoretical calculations, code predictions have been compared to experimental results. The paper presents some of the comparisons performed on 17x17 fuel irradiated in FESSENHEIM 2 up to 30 GWd/tU under base load operation and in the CAP reactor under load follow and frequency control conditions. It is shown that the experimental results can be predicted with reasonable accuracy by the CYRANO 2 code. The experimental work was carried out under joint R and D programs by EDF, FRAGEMA, CEA, and WESTINGHOUSE (CAP program by the French partners only). (author)

  17. Classification of working processes to facilitate occupational hazard coding on industrial trawlers

    DEFF Research Database (Denmark)

    Jensen, Olaf C; Stage, Søren; Noer, Preben

    2003-01-01

    BACKGROUND: Commercial fishing is an extremely dangerous economic activity. In order to describe the risks involved more accurately, a specific injury coding based on the working process was developed. METHOD: Observation on six different types of vessels was conducted and allowed a description and a classification of the principal working processes on all kinds of vessels, with a detailed classification for industrial trawlers. In industrial trawling, fish are landed for processing purposes, for example, for the production of fish oil and fish meal. The classification was subsequently used to code the injuries reported to the Danish Maritime Authority over a 5-year period. RESULTS: On industrial trawlers, 374 of 394 (95%) injuries were captured by the classification. Setting out and hauling in the gear and nets were the processes with the most injuries and accounted for 58.9% of all injuries...

  18. Coding properties of three intrinsically distinct retinal ganglion cells under periodic stimuli: a computational study

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2016-09-01

    Full Text Available As the sole output neurons in the retina, ganglion cells play significant roles in transforming visual information into spike trains and transmitting them to the higher visual centers. However, the coding strategies that retinal ganglion cells (RGCs) adopt to accomplish these processes are not yet completely clear. To clarify these issues, we investigate the coding properties of three types of RGCs (repetitive spiking, tonic firing, and phasic firing) by two different measures (spike-rate and spike-latency). Model results show that for periodic stimuli, the repetitive spiking RGC and the tonic RGC exhibit similar spike-rate patterns. Their spike-rates decrease gradually with increased stimulus frequency; moreover, variation of the stimulus amplitude changes the two RGCs' spike-rate patterns. The phasic RGC activates strongly at medium frequencies when the stimulus amplitude is low, but switches to responding strongly at low frequencies when a high stimulus amplitude is applied. These results suggest that stimulus amplitude is a prominent factor in regulating how RGCs encode periodic signals. Similar conclusions can be drawn when analyzing the spike-latency patterns of the three RGCs. More importantly, the above phenomena can be accurately reproduced by Hodgkin's three classes of neurons, indicating that RGCs can perform the typical three classes of firing dynamics, depending on differences in ion channel densities. Consequently, model results from the three RGCs may not be specific to the retina but may also apply to neurons in other brain regions which exhibit some or all of Hodgkin's three excitability classes.

  19. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji

    1998-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of a MC code is easily parallelized by this method, it is necessary, and practically difficult, to optimize the code with respect to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio in the case of 128 processors remains at only about one hundred when using the test bed for the performance evaluation. Through the parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance by static load balancing, especially in neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in a reduction of nearly fifteen percent in execution time. (author)
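
    The trade-off behind such dynamic balancing is easy to caricature: smaller history batches keep processors of unequal speed evenly loaded but incur the per-assignment communication cost more often. The sketch below (Python) searches for the batch size that minimizes a modeled makespan; the cost model and numbers are schematic illustrations, not the authors' actual scheme.

        def makespan(total, chunk, rates, latency):
            """Modeled finish time when 'total' histories are handed out in
            chunks to the earliest-free processor; each handout pays a fixed
            communication latency plus chunk/rate of compute time."""
            finish = [0.0] * len(rates)
            left = total
            while left > 0:
                n = min(chunk, left)
                p = finish.index(min(finish))        # earliest-free processor
                finish[p] += latency + n / rates[p]  # comm cost + compute cost
                left -= n
            return max(finish)

        rates = [1.0e4, 0.8e4, 1.2e4, 0.5e4]         # histories/sec per node
        best = min(range(1000, 100001, 1000),
                   key=lambda c: makespan(1_000_000, c, rates, latency=0.05))
        print("best chunk size:", best)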

  20. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Science.gov (United States)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  1. Implementation of a dry process fuel cycle model into the DYMOND code

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jeong, Chang Joon; Choi, Hang Bok

    2004-01-01

    For the analysis of a dry process fuel cycle, new modules were implemented in the fuel cycle analysis code DYMOND, which was developed by the Argonne National Laboratory. The modifications concerned the energy demand prediction model, a Canada Deuterium Uranium (CANDU) reactor model, a model for the direct use of spent Pressurized Water Reactor (PWR) fuel in CANDU reactors (the DUPIC fuel cycle), the fuel cycle calculation module, and the input/output modules. The performance of the modified DYMOND code was assessed for postulated once-through fuel cycle models including both the PWR and the CANDU reactor. This paper presents the modifications of the DYMOND code and the results of sample calculations for the PWR once-through and DUPIC fuel cycles.

  2. A Deformation Analysis Code of CANDU Fuel under the Postulated Accident: ELOCA

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jung, Jong Yeob

    2006-11-15

    Deformations of the fuel element or fuel channel might be the main cause of fuel failure. Therefore, accurate prediction of the deformation and the corresponding analysis capabilities are closely related to an increase in the safety margin of the reactor. In this report, among the performance analysis and transient behavior prediction computer codes, the deformation analysis codes ELOCA, HOTSPOT, CONTACT-1, and PTDFORM are briefly introduced, and each code's objectives, applicability, and relations are explained. In particular, the user manual for the ELOCA code, which analyzes fuel deformation and the release of fission products during the transient period after postulated accidents, is provided so that it can guide potential users of the code and save time and economic loss by reducing trial and error.

  4. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  5. A System Structure for a VHTR-SI Process Dynamic Simulation Code

    International Nuclear Information System (INIS)

    Chang, Jiwoon; Shin, Youngjoon; Kim, Jihwan; Lee, Kiyoung; Lee, Wonjae; Chang, Jonghwa; Youn, Cheung

    2008-01-01

    The VHTR-SI process dynamic simulation code, embedded in a mathematical solution engine, is an application software system that simulates the dynamic behavior of the VHTR-SI process. The software system also supports a user-friendly graphical user interface (GUI) for user input/output. Structured analysis techniques were developed in the late 1970s by Yourdon, DeMarco, Gane and Sarson to apply a systematic approach to systems analysis. They included the use of data flow diagrams and data modeling and fostered the use of an implementation-independent graphical notation for documentation. In this paper, we present a system structure for the VHTR-SI process dynamic simulation code using the methodologies of structured analysis.

  6. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    Science.gov (United States)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influence

  7. Multi-processing CTH: Porting legacy FORTRAN code to MP hardware

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.L.; Elrick, M.G.; Hertel, E.S. Jr.

    1996-12-31

    CTH is a family of codes developed at Sandia National Laboratories for use in modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH has historically been run on systems where the data are directly accessible to the cpu, such as workstations and vector supercomputers. Multiple cpus can be used if all data are accessible to all cpus. This is accomplished by placing compiler directives or subroutine calls within the source code. The CTH team has implemented this scheme for Cray shared memory machines under the Unicos operating system. This technique is effective, but difficult to port to other (similar) shared memory architectures because each vendor has a different format of directives or subroutine calls. A different model of high performance computing is one where many (> 1,000) cpus work on a portion of the entire problem and communicate by passing messages that contain boundary data. Most, if not all, codes that run effectively on parallel hardware were written with a parallel computing paradigm in mind. Modifying an existing code written for serial nodes poses a significantly different set of challenges that will be discussed. CTH, a legacy FORTRAN code, has been modified to allow for solutions on distributed memory parallel computers such as the IBM SP2, the Intel Paragon, Cray T3D, or a network of workstations. The message passing version of CTH will be discussed and example calculations will be presented along with performance data. Current timing studies indicate that CTH is 2--3 times faster than equivalent C++ code written specifically for parallel hardware. CTH on the Intel Paragon exhibits linear speed up with problems that are scaled (constant problem size per node) for the number of parallel nodes.

  8. User input verification and test driven development in the NJOY21 nuclear data processing code

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCartney, Austin Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-21

    Before physically-meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g. cross sections and angular distributions). Perhaps the most popular and widely-trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970's. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides validation of user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove more intuitive and easier to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document serves as a discussion of the current state of input checking and testing practices in NJOY21.
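
    In miniature, the validate-as-you-read pattern with an accompanying test looks like the sketch below (Python). The card name, bounds, and error messages are invented for illustration and are not NJOY21's actual input schema or API.

        # Minimal input-validation pattern with an accompanying test, in the
        # spirit of test-driven development. The "card" layout and bounds are
        # hypothetical, not NJOY21's real input format.

        class InputError(ValueError):
            pass

        def parse_temperature_card(tokens):
            """Parse 'temp <kelvin>' and reject bad values immediately, so the
            user learns about the problem while writing the input."""
            if len(tokens) != 2 or tokens[0] != "temp":
                raise InputError(f"expected 'temp <kelvin>', got {tokens!r}")
            try:
                t = float(tokens[1])
            except ValueError:
                raise InputError(f"temperature {tokens[1]!r} is not a number")
            if not 0.0 < t < 1.0e5:
                raise InputError(f"temperature {t} K out of range (0, 1e5)")
            return t

        def test_parse_temperature_card():
            assert parse_temperature_card(["temp", "293.6"]) == 293.6
            for bad in (["temp", "-1"], ["temp", "hot"], ["tmp", "300"]):
                try:
                    parse_temperature_card(bad)
                except InputError:
                    pass
                else:
                    raise AssertionError(f"{bad} should have been rejected")

        test_parse_temperature_card()
        print("all input-validation tests pass")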

  9. Hospital Coding Practice, Data Quality, and DRG-Based Reimbursement under the Thai Universal Coverage Scheme

    Science.gov (United States)

    Pongpirul, Krit

    2011-01-01

    In the Thai Universal Coverage scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group (DRG) reimbursement. Questionable quality of the submitted DRG codes has been of concern whereas knowledge about hospital coding practice has been lacking. The objectives of this thesis are (1) To explore hospital coding…

  10. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M=2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  12. Benchmarking of codes for electron cyclotron heating and electron cyclotron current drive under ITER conditions

    NARCIS (Netherlands)

    Prater, R.; Farina, D.; Gribov, Y.; Harvey, R. W.; Ram, A. K.; Lin-Liu, Y. R.; Poli, E.; Smirnov, A. P.; Volpe, F.; Westerhof, E.; Zvonkovo, A.

    2008-01-01

    Optimal design and use of electron cyclotron heating requires that accurate and relatively quick computer codes be available for prediction of wave coupling, propagation, damping and current drive at realistic levels of EC power. To this end, a number of codes have been developed in laboratories

  13. PSP: rapid identification of orthologous coding genes under positive selection across multiple closely related prokaryotic genomes.

    Science.gov (United States)

    Su, Fei; Ou, Hong-Yu; Tao, Fei; Tang, Hongzhi; Xu, Ping

    2013-12-27

    With genomic sequences of many closely related bacterial strains made available by deep sequencing, it is now possible to investigate trends in prokaryotic microevolution. Positive selection is a sub-process of microevolution in which a particular mutation is favored, causing the allele frequency to shift continuously in one direction. Wide scanning of prokaryotic genomes has shown that positive selection at the molecular level is much more frequent than expected. Genes with significant positive selection may play key roles in bacterial adaptation to different environmental pressures. However, selection pressure analyses are computationally intensive and awkward to configure. Here we describe an open access web server, designated PSP (Positive Selection analysis for Prokaryotic genomes), for performing evolutionary analysis on orthologous coding genes, specially designed for rapid comparison of dozens of closely related prokaryotic genomes. Remarkably, PSP facilitates functional exploration at multiple levels by assignment and enrichment of KO, GO or COG terms. To illustrate this user-friendly tool, we analyzed Escherichia coli and Bacillus cereus genomes and found that several genes, which play key roles in human infection and antibiotic resistance, show significant evidence of positive selection. PSP is freely available to all users without any login requirement at: http://db-mml.sjtu.edu.cn/PSP/. PSP ultimately allows researchers to perform genome-scale analysis of evolutionary selection across multiple prokaryotic genomes rapidly and easily, and to identify the genes undergoing positive selection, which may play key roles in host-pathogen interactions and/or environmental adaptation.

  14. The modeling of fuel rod behaviour under RIA conditions in the code DYN3D

    International Nuclear Information System (INIS)

    Rohde, U.

    2001-01-01

    A description of the fuel rod behaviour and heat transfer model used in the code DYN3D for nuclear reactor core dynamic simulations is given. Besides the solution of the heat conduction equations in fuel and cladding, the model comprises a detailed description of heat transfer in the gas gap by conduction, radiation and fuel-cladding contact. The gas gap behaviour is modeled in a mechanistic way, taking into account transient changes of the gas gap parameters based on given conditions for the initial state. Thermal, elastic and plastic deformations of fuel and cladding are taken into account within a 1D approximation. A creep law for the time-dependent estimation of plastic deformations is implemented. Metal-water reaction of the cladding material in the high temperature region is considered. The cladding-coolant heat transfer regime map covers the region from one-phase liquid convection to dispersed flow with superheated steam. Special emphasis is put on taking into account the impact of thermodynamic non-equilibrium conditions on heat transfer. For the validation of the model, experiments on fuel rod behaviour during RIAs, carried out in Russian and Japanese pulsed research reactors with shortened probes of fresh fuel rods, are calculated. Comparisons between calculated and measured results are shown and discussed. It is shown that the fuel rod behaviour is significantly influenced by plastic deformation of the cladding, post-crisis heat transfer under sub-cooled liquid conditions, and heat release from the metal-water reaction. Numerical studies concerning the fuel rod behaviour under RIA conditions in power reactors are reported. It is demonstrated that the fuel rod behaviour at the high pressures and flow rates of power reactors is different from the behaviour under the atmospheric pressure and stagnant flow conditions of the experiments. The mechanisms of fuel rod failure for fresh and burned fuel reported in the literature can be qualitatively reproduced by the DYN3D model.

  15. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.

    Science.gov (United States)

    Kounios, J; Holcomb, P J

    1994-07-01

    Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.

  16. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin

    2012-01-01

    Monte Carlo (MC) particle transport analysis for a complex system such as a research reactor, accelerator, or fusion facility may require accurate modeling of complicated geometry. Manual modeling through the text interface of an MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of a computer-aided design (CAD) system. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of existing MC codes without any modifications, but implies latent data inconsistency due to the difference of the geometry modeling systems. In the second approach, an MC code utilizes the CAD data for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.
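
    The elementary query behind such surface-based particle tracking is the distance from a particle to its next surface crossing. A minimal sketch for a single spherical surface (Python; illustrative only, not the OCC-based module described above):

        import math

        def distance_to_sphere(p, d, center, radius):
            # Smallest positive distance along the unit direction d from point p
            # to the sphere surface, or None if the ray never crosses it: the
            # elementary query behind surface-to-surface particle tracking.
            oc = [p[i] - center[i] for i in range(3)]
            b = sum(oc[i] * d[i] for i in range(3))
            c = sum(x * x for x in oc) - radius * radius
            disc = b * b - c
            if disc < 0.0:
                return None          # no intersection
            sq = math.sqrt(disc)
            for t in (-b - sq, -b + sq):
                if t > 1e-12:        # first crossing ahead of the particle
                    return t
            return None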

  17. LACAN Code for global simulation of SILVA laser isotope separation process

    International Nuclear Information System (INIS)

    Quaegebeur, J.P.; Goldstein, S.

    1991-01-01

    The functions used for the definition of a SILVA separator require a large number of dimensional and operating parameters. Sizing a laser isotope separation plant requires the determination of these parameters for optimization. In the LACAN simulation code, each elementary physical process is described by a simplified model. An example is given for a uranium isotope separation plant whose separation power is optimized with 6 parameters.

  18. Development of the GUI environments of MIDAS code for convenient input and output processing

    International Nuclear Information System (INIS)

    Kim, K. L.; Kim, D. H.

    2003-01-01

    MIDAS is being developed at KAERI as an integrated severe accident analysis code with easy model modification and addition, achieved by restructuring the data transfer scheme. In this paper, the input file management system, IEDIT, and the graphic simulation system, SATS, are presented as the MIDAS input and output GUI systems. These two systems would form the basis of the MIDAS GUI system for input and output processing, and they are expected to be useful tools for severe accident analysis and simulation.

  19. Validation of the TRACR3D code for soil water flow under saturated/unsaturated conditions in three experiments

    International Nuclear Information System (INIS)

    Perkins, B.; Travis, B.; DePoorter, G.

    1985-01-01

    Validation of the TRACR3D code in a one-dimensional form was obtained for flow of soil water in three experiments. In the first experiment, a pulse of water entered a crushed-tuff soil and initially moved under conditions of saturated flow, quickly followed by unsaturated flow. In the second experiment, steady-state unsaturated flow took place. In the final experiment, two slugs of water entered crushed tuff under field conditions. In all three experiments, experimentally measured data for volumetric water content agreed, within experimental errors, with the volumetric water content predicted by the code simulations. The experiments and simulations indicated the need for accurate knowledge of boundary and initial conditions, amount and duration of moisture input, and relevant material properties as input into the computer code. During the validation experiments, limitations on monitoring of water movement in waste burial sites were also noted. 5 references, 34 figures, 9 tables

  20. Formation process of Malaysian modern architecture under influence of nationalism

    OpenAIRE

    宇高, 雄志; 山崎, 大智

    2001-01-01

    This paper examines the formation process of Malaysian modern architecture under the influence of nationalism, through the process of Malaysia's independence. The national style, "Malaysian national architecture", was engaged against the background of the political environment of the post-colonial situation. Malaysian urban design is also determined by the balance between the ethnic cultures and the national culture. In Malaysia, the Malay ethnic culture was chosen as the national culture....

  1. Aspects of a generic photovoltaic model examined under the German grid code for medium voltage

    Energy Technology Data Exchange (ETDEWEB)

    Theologitis, Ioannis-Thomas; Troester, Eckehard; Ackermann, Thomas [Energynautics GmbH, Langen (Germany)

    2011-07-01

    The increasing penetration of photovoltaic power systems into the power grid has drawn attention to the issue of ensuring the smooth absorption of solar energy while securing the normal and steady operation of the grid. Nowadays, PV systems must meet a number of technical requirements to address this issue. This paper investigates a generic grid-connected photovoltaic model that was developed by DIgSILENT and is part of the library in the new version of the PowerFactory v14.1 software used in this study. The model has a nominal rated peak power of 0.5 MVA and a design power factor cos φ = 0.95. The study focuses on the description of the model, its control system and its ability to reflect important requirements that a grid-connected PV system should have met by January 2011 according to the German grid code for medium voltage. The model undergoes various simulations: static voltage support, active power control and dynamic voltage support - Fault Ride Through (FRT) - are examined. The results show that the generic model is capable of active power reduction during over-frequency events and of FRT behavior during voltage dips. The reactive power control added to the model improves the control system and makes the model capable of static voltage support under sudden active-power injection changes at the point of common coupling. Despite the simplifications and shortcomings of this generic model, basic requirements of modern PV systems can be addressed. Further improvements could make it more complete and applicable to more detailed studies. (orig.)
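
    For orientation, the over-frequency active power reduction referred to above is a simple droop. The sketch below encodes the commonly cited medium-voltage rule of a 40% reduction of the momentary power per Hz above 50.2 Hz, with disconnection above 51.5 Hz; treat these thresholds as assumptions to be checked against the applicable version of the grid code.

        def curtailed_power(p_m, f_hz):
            # Active power at over-frequency: full power up to 50.2 Hz, droop of
            # 40% of the momentary power p_m per Hz above it, disconnection at
            # 51.5 Hz.  Thresholds are assumptions; check the applicable code.
            if f_hz >= 51.5:
                return 0.0
            if f_hz <= 50.2:
                return p_m
            return p_m * (1.0 - 0.4 * (f_hz - 50.2))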

  2. NADAC and MERGE: computer codes for processing neutron activation analysis data

    International Nuclear Information System (INIS)

    Heft, R.E.; Martin, W.E.

    1977-01-01

    Absolute disintegration rates of specific radioactive products induced by neutron irradiation of a sample are determined by spectrometric analysis of gamma-ray emissions. Nuclide identification and quantification are carried out by a complex computer code, GAMANAL (described elsewhere). The output of GAMANAL is processed by NADAC, a computer code that converts the data on observed disintegration rates to data on the elemental composition of the original sample. Computations by NADAC are on an absolute basis, in that stored nuclear parameters are used rather than the difference between the observed disintegration rate and the rate obtained by concurrent irradiation of elemental standards. The NADAC code provides for the computation of complex cases, including those involving interrupted irradiations, parent-daughter decay situations where the daughter may also be produced independently, nuclides with very short half-lives compared to the counting interval, and those involving interference by competing neutron-induced reactions. The NADAC output consists of a printed report, which summarizes analytical results, and a card-image file, which can be used as input to another computer code, MERGE. The purpose of MERGE is to combine the results of multiple analyses and produce a single final answer, based on all available information, for each element found.
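
    The heart of such an absolute computation is the inversion of the activation equation for the element mass. A single-reaction sketch (Python) is shown below; it omits the interrupted-irradiation, parent-daughter and interference cases that NADAC also handles.

        import math

        N_A = 6.02214076e23   # Avogadro constant, 1/mol

        def element_mass_g(A_meas, M, theta, sigma_cm2, phi, t_half_s, t_irr, t_decay):
            # Invert A = (m/M) * N_A * theta * sigma * phi * S * D for the mass m,
            # where S is the saturation factor and D the decay factor.  theta is
            # the isotopic abundance, sigma the activation cross section (cm^2),
            # phi the neutron flux (1/cm^2/s), A_meas the disintegration rate (Bq).
            lam = math.log(2.0) / t_half_s
            S = 1.0 - math.exp(-lam * t_irr)     # build-up during irradiation
            D = math.exp(-lam * t_decay)         # decay before counting
            return A_meas * M / (N_A * theta * sigma_cm2 * phi * S * D)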

  3. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    Science.gov (United States)

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  4. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    A comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  6. Review of design codes of concrete encased steel short columns under axial compression

    Directory of Open Access Journals (Sweden)

    K.Z. Soliman

    2013-08-01

    In recent years, the use of concrete encased steel columns has increased significantly in medium-rise and high-rise buildings. The aim of the present investigation is to assess experimentally the current methods and codes for evaluating the ultimate load behavior of concrete encased steel short columns. The current state of design provisions for composite columns in the Egyptian codes ECP203-2007 and ECP-SC-LRFD-2012, as well as the American Institute of Steel Construction AISC-LRFD-2010, the American Concrete Institute ACI-318-2008, and the British Standard BS-5400-5, was reviewed. The portions of the axial capacity carried by the encased steel section and by the concrete section were also studied according to the previously mentioned codes. Ten concrete encased steel columns were investigated experimentally to study the effect of concrete confinement and different types of encased steel sections. The measured axial capacity of the ten tested composite columns was compared with the values calculated by the above-mentioned codes. It is concluded that non-negligible discrepancies exist between the codes and the experimental results, as the confinement effect was not considered in predicting either the strength or the ductility of the concrete. The confining effect was clearly influenced by the shape of the encased steel section: the tube-shaped steel section leads to better confinement than the SIB section. Among the codes used, the ECP-SC-LRFD-2012 led to the most conservative results.

  7. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives both the development and the dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Research in image processing techniques has been driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques that can be employed at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare the performance of various contemporary image pre-processing frameworks - Compressed Sensing, Singular Value Decomposition and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
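
    As a concrete example of the last point, the reversible LeGall 5/3 integer wavelet transform (the lossless transform of JPEG2000) reduces to two integer lifting steps. A one-level sketch for an even-length signal (Python; whole-sample symmetric extension assumed):

        def lwt53_forward(x):
            # One level of the LeGall 5/3 integer wavelet transform (lifting
            # scheme, floor-division rounding, symmetric extension at the
            # borders).  Assumes an even-length integer sequence; exactly
            # invertible by undoing the update step, then the predict step.
            n = len(x)
            d = []                                # detail (high-pass) samples
            for i in range(n // 2):
                right = x[2*i + 2] if 2*i + 2 < n else x[2*i]   # extension
                d.append(x[2*i + 1] - (x[2*i] + right) // 2)    # predict step
            s = []                                # approximation (low-pass)
            for i in range(n // 2):
                left = d[i - 1] if i > 0 else d[0]              # extension
                s.append(x[2*i] + (left + d[i] + 2) // 4)       # update step
            return s, d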

  8. FARST: A computer code for the evaluation of FBR fuel rod behavior under steady-state/transient conditions

    International Nuclear Information System (INIS)

    Nakamura, M.; Sakagami, M.

    1984-01-01

    FARST, a computer code for the evaluation of fuel rod thermal and mechanical behavior under steady-state/transient conditions, has been developed. The code characteristics are summarized as follows: (I) FARST evaluates fuel rod behavior under transient conditions; the code analyzes thermal and mechanical phenomena within a fuel rod, taking into account the temperature change of the coolant surrounding the fuel rod. (II) Permanent strains such as plastic, creep and swelling strains, as well as thermoelastic deformations, can be analyzed by using the strain increment method. (III) The axial force and contact pressure which act on the fuel stack and cladding are analyzed based on stick/slip conditions. (IV) FARST uses a pellet swelling model which depends on the contact pressure between pellet and cladding, and an empirical pellet relocation model designated the 'jump relocation model'. The code was successfully applied to analyses of fuel rod irradiation data from the CABRI pulse reactor for nuclear safety research in Cadarache and the NSRR pulse reactor for nuclear safety research at the Japan Atomic Energy Research Institute. The code was further applied to stress analysis of a 1000 MW class large FBR plant fuel rod during transient conditions. The steady-state model used so far gave conservative results for cladding stress during an overpower transient, but underestimated the cladding stress during a rapid temperature decrease of the coolant. (orig.)

  9. Performance Analysis of a De-correlated Modified Code Tracking Loop for Synchronous DS-CDMA System under Multiuser Environment

    Science.gov (United States)

    Wu, Ya-Ting; Wong, Wai-Ki; Leung, Shu-Hung; Zhu, Yue-Sheng

    This paper presents the performance analysis of a De-correlated Modified Code Tracking Loop (D-MCTL) for synchronous direct-sequence code-division multiple-access (DS-CDMA) systems in a multiuser environment. Previous studies have shown that the imbalance of multiple access interference (MAI) in the time-lead and time-lag portions of the signal causes a tracking bias or instability problem in traditional correlating tracking loops like the delay lock loop (DLL) or the modified code tracking loop (MCTL). In this paper, we exploit the de-correlating technique to combat the MAI at the on-time code position of the MCTL. Unlike applying the same technique to the DLL, which requires an extensive search algorithm to compensate the noise imbalance and may introduce a small tracking bias at low signal-to-noise ratio (SNR), the proposed D-MCTL has much lower computational complexity and exhibits zero tracking bias over the whole range of SNR, regardless of the number of interfering users. Furthermore, performance analysis and simulations based on Gold codes show that the proposed scheme has better mean square tracking error, mean time to lose lock and near-far resistance than the other tracking schemes, including the traditional DLL (T-DLL), the traditional MCTL (T-MCTL) and the modified de-correlated DLL (MD-DLL).
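
    The de-correlating idea itself is compact: stack the on-time correlator outputs of all users and multiply by the inverse of the code cross-correlation matrix. A generic sketch of that step (Python; the discriminator details of the D-MCTL itself are in the paper):

        import numpy as np

        def decorrelate(y, R):
            # Multiply the bank of on-time correlator outputs y (one per user)
            # by R^{-1}, the inverse of the code cross-correlation matrix,
            # stripping the multiple-access interference.
            return np.linalg.solve(R, y)

        # Two users with a cross-correlation of 0.25 (Gold-code-like):
        R = np.array([[1.00, 0.25],
                      [0.25, 1.00]])
        y = R @ np.array([1.0, -1.0])   # matched-filter outputs including MAI
        print(decorrelate(y, R))        # -> [ 1. -1.], interference removed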

  10. [Implications of mental image processing in the deficits of verbal information coding during normal aging].

    Science.gov (United States)

    Plaie, Thierry; Thomas, Delphine

    2008-06-01

    Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age 24 years) and 19 older adults (average age 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The variance analysis indicates a greater decrease with age in the concreteness effect. The major contribution of our study rests on the fact that the decline with age of dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  11. A multidisciplinary audit of clinical coding accuracy in otolaryngology: financial, managerial and clinical governance considerations under payment-by-results.

    Science.gov (United States)

    Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S

    2009-02-01

    To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Health Resource Grouping (HRG) assignment. 1250 randomly selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of 174.90 pounds per patient (14.7%); 60% of the total income variance was due to the miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution, the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue, and, given that the same data are used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These problems can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.

  12. SSYST, a code-system for analysing transient LWR fuel rod behaviour under off-normal conditions

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Gulden, W.

    1983-01-01

    SSYST is a code-system for analysing transient fuel rod behaviour under off-normal conditions, developed conjointly by the Institut fuer Kernenergetik und Energiesysteme (IKE), Stuttgart, and Kernforschungszentrum Karlsruhe (KfK) under contract of Projekt Nukleare Sicherheit (PNS) at KfK. The main differences between SSYST and similar codes are (1) an open-ended modular code organisation, and (2) a preference for simple models wherever possible. While the first feature makes SSYST a very flexible tool, easily adapted to changing requirements, the second leads to short execution times. The analysis of transient rod behaviour under LOCA boundary conditions takes 2 min of CPU time (IBM-3033), so that extensive parametric studies become possible. This paper gives an outline of the overall code organisation and a general overview of the physical models implemented. Besides explaining the routine application of SSYST in the analysis of loss-of-coolant accidents, examples are given of special applications which have led to a satisfactory understanding of the decisive influence of deviations from rotational symmetry on the fuel rod perimeter. (author)

  14. Improved inter-assembly heat transfer modeling under low flow conditions for the Super System Code (SSC)

    International Nuclear Information System (INIS)

    Horak, W.C.; Guppy, J.G.

    1984-01-01

    The Super System Code (SSC) was developed at the Brookhaven National Laboratory (BNL) for the thermal-hydraulic analysis of natural circulation transients, operational transients, and other system-wide transients in nuclear power plants. SSC is a generic, best-estimate code that models the in-vessel components, heat transport loops, plant protection systems and plant control systems. SSC also simulates the balance of plant when interfaced with the MINET code. SSC has been validated against both numerical and experimental data bases and is now used by several outside users. An important area of interest in LMFBR transient analysis is the prediction of the response of the reactor core under low flow conditions, such as those experienced during a natural circulation event. Under these circumstances there are many physical phenomena which must be modeled to provide an adequate representation in a computer code simulation. The present version of SSC contains numerous models which account for most of the major phenomena. One area where the present model is being improved, however, is the representation of heat transfer and buoyancy effects under low flow operation. To improve the present version properly, models representing certain inter-assembly effects must be added.

  15. Error analysis of supercritical water correlations using ATHLET system code under DHT conditions

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, J., E-mail: jeffrey.samuel@uoit.ca [Univ. of Ontario Inst. of Tech., Oshawa, ON (Canada)

    2014-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is used for the analysis of anticipated and abnormal plant transients, including safety analysis of Light Water Reactors (LWRs) and Russian graphite-moderated high-power channel-type reactors (RBMKs). The range of applicability of ATHLET has been extended to supercritical water by updating the fluid- and transport-properties packages, thus enabling the code to be used in the analysis of SuperCritical Water-cooled Reactors (SCWRs). Several well-known heat-transfer correlations for supercritical fluids were added to the ATHLET code, and a numerical model was created to represent an experimental test section. In this work, the error in the Heat Transfer Coefficient (HTC) calculated by the ATHLET model is studied, along with the ability of the various correlations to predict different heat transfer regimes. (author)

  16. Locating protein-coding sequences under selection for additional, overlapping functions in 29 mammalian genomes

    DEFF Research Database (Denmark)

    Lin, Michael F; Kheradpour, Pouya; Washietl, Stefan

    2011-01-01

    …conservation compared to typical protein-coding genes, especially at synonymous sites. In this study, we use genome alignments of 29 placental mammals to systematically locate short regions within human ORFs that show conspicuously low estimated rates of synonymous substitution across these species. The 29-species alignment provides statistical power to locate more than 10,000 such regions with resolution down to nine-codon windows, which are found within more than a quarter of all human protein-coding genes and contain ~2% of their synonymous sites. We collect numerous lines of evidence that the observed synonymous constraint in these regions reflects selection on overlapping functional elements including splicing regulatory elements, dual-coding genes, RNA secondary structures, microRNA target sites, and developmental enhancers. Our results show that overlapping functional elements are common in mammalian genomes.

  17. ASPECTS CONCERNING THE JOINT VENTURE UNDER THE REGULATION OF THE NEW CIVIL CODE

    Directory of Open Access Journals (Sweden)

    Ana-Maria Lupulescu

    2013-11-01

    The New Civil Code makes the transition, for the first time in the Romanian legal system, from the duality to the unity of private law. Consequently, the Civil Code contains a more structured and comprehensive legal regulation of the company, although not entirely immune from criticism, with particular reference to the simple company, a regulation that expressly characterizes itself as the common law in this field. Within these general provisions, the legislator has considered the joint venture, to which, however, as in the previous regulation contained in the old Commercial Code – now repealed –, it does not devote many legal provisions, in order to maintain the flexibility of this form of company. Therefore, this approach appears particularly useful for legal analysts and, especially, for practitioners, since it aims to achieve a comprehensive analysis of the joint venture, a form of company with practical incidence.

  18. Stability of prebiotic, laminaran oligosaccharide under food processing conditions

    Science.gov (United States)

    Chamidah, A.

    2018-04-01

    Prebiotic stability tests on laminaran oligosaccharide under food processing conditions are needed to determine the ability of the prebiotic to withstand processing. Laminaran oligosaccharide is produced by enzymatic hydrolysis. To further apply this prebiotic, it is necessary to test its performance in food processing. A prebiotic, alone or in combination with a probiotic, can improve human digestive health. The evaluation of prebiotic effectiveness should take into account its chemical and functional stability. This study aims to investigate the stability of laminaran oligosaccharide under food processing conditions.

  19. Fast dose assessment models, parameters and code under accident conditions for Qinshan Nuclear Power Plant

    International Nuclear Information System (INIS)

    Zhang, Z.Y.; Hu, E.B.; Meng, X.C.; Zhang, Y.; Yao, R.T.

    1993-01-01

    According to the requirements of the accident emergency plan for the Qinshan Nuclear Power Plant, a Gaussian straight-line plume model was adopted for estimating radionuclide concentrations in surface air. In addition, the effect of the surrounding mountain terrain on atmospheric dispersion was considered. By combining field atmospheric dispersion experiments and wind tunnel modeling tests, the necessary modifications were made to some models and parameters. A computer code for the assessment was written in Quick BASIC (V4.5). The radius of the assessment region is 10 km and the code is applicable to early accident assessment. (1 tab.)
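
    The underlying straight-line model is the standard Gaussian plume with ground reflection. A sketch is given below (Python); the terrain and wind-tunnel-derived corrections described above are not reproduced, and the dispersion parameters must be supplied for the downwind distance of interest.

        import math

        def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
            # Ground-reflecting Gaussian plume: concentration at crosswind
            # offset y and height z for a continuous point source of strength Q
            # (Bq/s), wind speed u (m/s) and effective release height H (m).
            # sigma_y, sigma_z are the dispersion parameters at the downwind
            # distance of interest (site- and stability-dependent).
            lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                        math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
            return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical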

  20. Parallel processing is good for your scientific codes...But massively parallel processing is so much better

    International Nuclear Information System (INIS)

    Thomas, B.; Domain, Ch.; Souffez, Y.; Eon-Duval, P.

    1998-01-01

    Harnessing the power of many computers to solve difficult scientific problems concurrently is one of the most innovative trends in High Performance Computing. At EDF, we have invested in parallel computing and have achieved significant results. First, we improved the processing speed of strategic codes in order to extend their scope. Then we turned to numerical simulations at the atomic scale. These computations, which we never dreamt of before, provided us with a better understanding of metallurgical phenomena. More precisely, we were able to trace defects in alloys that are used in nuclear power plants. (author)

  1. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    Science.gov (United States)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity compared to the traditional black-and-white QR Code. Using a Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that the degradations introduced by common image compression algorithms have on the decoding process. Results show that successful decoding can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.

  2. 76 FR 66235 - Bar Code Technologies for Drugs and Biological Products; Retrospective Review Under Executive...

    Science.gov (United States)

    2011-10-26

    ... interactions, overdoses, and patient allergies) and retail pharmacy-based computer systems that use a bar-coded... drugs. The goal of this initiative is to implement a system to further ensure patient safety and to..., and ideas on the need, maturity, and acceptability of alternative identification technologies for the...

  3. RODSWELL: a computer code for the thermomechanical analysis of fuel rods under LOCA conditions

    International Nuclear Information System (INIS)

    Casadei, F.; Laval, H.; Donea, J.; Jones, P.M.; Colombo, A.

    1984-01-01

    The code calculates the variation in space and time of all significant fuel rod variables, including fuel, gap and cladding temperatures, fuel and cladding deformation, cladding oxidation and rod internal pressure. The code combines a transient two-dimensional heat conduction code and a one-dimensional mechanical model for the cladding deformation. The first sections of this report deal with the heat conduction model and the finite element discretization used for the thermal analysis. The mechanical deformation model is presented next: the modelling of creep, phase change and oxidation of the zircaloy cladding is discussed in detail. A model describing the effect of oxidation and oxide cracking on the mechanical strength of the cladding is also presented. Next, a mechanical restraint model is presented, which allows the simulation of the presence of neighbouring rods and is particularly important in assessing the amount of channel blockage during a transient. A description of the models used for the coolant conditions and for the power generation follows. The heat source can be placed either in the fuel or in the cladding, and direct or indirect clad heating by electrical power can be simulated. A section follows dealing with the steady-state and transient types of calculation and with the automatic variable time-step selection during the transient. The last sections deal with the presentation of results, graphical output, test problems and an example of a general application of the code.

  4. Implementation of decommissioning materials conditional clearance process to the OMEGA calculation code

    International Nuclear Information System (INIS)

    Zachar, Matej; Necas, Vladimir; Daniska, Vladimir

    2011-01-01

    The activities performed during the decommissioning of a nuclear installation inevitably lead to the production of a large amount of radioactive material to be managed. A significant part of the material has such a low radioactivity level that it can be released to the environment without any restriction on further use. On the other hand, materials with radioactivity slightly above the defined unconditional clearance level may be released conditionally for a specific purpose, in accordance with a developed scenario assuring that radiation exposure limits for the population are not exceeded. Managing decommissioning materials in this way can increase the recycling and reuse of solid materials and save radioactive waste repository volume. In this paper, the implementation of the conditional release process in the OMEGA code, which is used for the calculation of decommissioning parameters, is analyzed in detail. The analytical approach to the assessment of material parameters first assumes a definition of radiological limit conditions, based on the evaluation of possible scenarios for conditionally released materials, and their application to the appropriate sorter type in the existing material and radioactivity flow system. Further calculation procedures with the relevant technological and economic parameters, mathematically describing, e.g., final radiation monitoring or transport outside the site, are applied to the OMEGA code in the next step. Together with the limits, the new procedures, creating an independent material stream, allow the evaluation of the conditional material release process during decommissioning. Model calculations evaluating various scenarios with different input parameters and considering the conditional release of materials to the environment were performed to verify the implemented methodology. Output parameters and results of the model assessment are presented and discussed in the final part of the paper.

  5. Development of the object-oriented analysis code for the estimation of material balance in pyrochemical reprocessing process (2). Modification of the code for the analysis of holdup of nuclear materials in the process

    International Nuclear Information System (INIS)

    Okamura, Nobuo; Tanaka, Hiroshi

    2001-04-01

    Pyrochemical reprocessing is thought to be a promising process for the FBR fuel cycle, mainly from the economic viewpoint. However, the material behavior in the process is not sufficiently understood because of the lack of experimental data. The authors have developed an object-oriented analysis code for the estimation of the material balance in the process, which is flexibly adaptable to changes in the process flow sheet. The objective of this study is to modify the code so as to analyze the holdup of nuclear materials in the pyrochemical process from the safeguards viewpoint, because the holdup in this process may be larger than in an aqueous process. As a result of the modification, the relationship between the production of nuclear materials and their holdup in the process can be evaluated by the code. (author)

  6. CXSFIT Code Application to Process Charge-Exchange Recombination Spectroscopy Data at the T-10 Tokamak

    Science.gov (United States)

    Serov, S. V.; Tugarinov, S. N.; Klyuchnikov, L. A.; Krupin, V. A.; von Hellermann, M.

    2017-12-01

    The applicability of the CXSFIT code to the processing of experimental data from Charge-eXchange Recombination Spectroscopy (CXRS) diagnostics at the T-10 tokamak is studied, with a view to its further use for processing experimental data at the ITER facility. The design and operating principle of the CXRS diagnostics are described. The main methods for processing the CXRS spectra of the 5291-Å line of C5+ ions at the T-10 tokamak (with and without subtraction of parasitic emission from the edge plasma) are analyzed. The method of averaging the CXRS spectra over several shots, used at the T-10 tokamak to increase the signal-to-noise ratio, is described. The approximation of the spectrum by a set of Gaussian components is used to identify the active CXRS line in the measured spectrum. Using the CXSFIT code, the ion temperature in ohmic discharges and in discharges with auxiliary electron cyclotron resonance heating (ECRH) at the T-10 tokamak is calculated from the CXRS spectra of the 5291-Å line. The time behavior of the ion temperature profile in different ohmic heating modes is studied. The dependence of the temperature profile on the ECRH power is measured, and the dynamics of the ECR removal of carbon nuclei from the T-10 plasma is described. Experimental data from the CXRS diagnostics at T-10 contribute substantially to physical research programs on heat and particle transport in tokamak plasmas and to the investigation of geodesic acoustic mode properties.
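
    The step from a fitted Gaussian component to an ion temperature follows from Doppler broadening, T_i = m c^2 (sigma/lambda_0)^2. A single-component sketch (Python; CXSFIT itself fits sums of several Gaussians to separate the active line from parasitic emission):

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(lam, amp, center, sigma, offset):
            return amp * np.exp(-(lam - center)**2 / (2.0 * sigma**2)) + offset

        def ion_temperature_eV(lam, counts, mc2_eV=12 * 931.494e6):
            # Fit one Gaussian to a background-subtracted spectrum (wavelengths
            # lam in Angstroms around 5291 A) and convert the Doppler width to
            # T_i = m c^2 * (sigma / lambda_0)^2, in eV, for carbon ions.
            p0 = [counts.max() - counts.min(), 5291.0, 1.0, counts.min()]
            (amp, center, sigma, offset), _ = curve_fit(gaussian, lam, counts, p0=p0)
            return mc2_eV * (sigma / center)**2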

  7. Enabling Ethical Code Embeddedness in Construction Organizations: A Review of Process Assessment Approach.

    Science.gov (United States)

    Oladinrin, Olugbenga Timo; Ho, Christabel Man-Fong

    2016-08-01

    Several researchers have identified codes of ethics (CoEs) as tools that stimulate positive ethical behavior by shaping the organisational decision-making process, but few have considered the information needed for code implementation. Beyond being a legal and moral responsibility, ethical behavior needs to become an organisational priority, which requires an alignment process that integrates employee behavior with the organisation's ethical standards. This paper discusses processes for the responsible implementation of CoEs based on an extensive review of the literature. The internationally recognized European Foundation for Quality Management Excellence Model (EFQM model) is proposed as a suitable framework for assessing an organisation's ethical performance, including CoE embeddedness. The findings presented herein have both practical and research implications. They will encourage construction practitioners to shift their attention from ethical policies to possible enablers of CoE implementation and serve as a foundation for further research on ethical performance evaluation using the EFQM model. This is the first paper to discuss the model's use in the context of ethics in construction practice.

  8. Automated processing of thermal infrared images of Osservatorio Vesuviano permanent surveillance network by using Matlab code

    Science.gov (United States)

    Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa

    2017-04-01

    The permanent thermal infrared surveillance network of the Osservatorio Vesuviano (INGV) is composed of 6 stations which acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of the Osservatorio Vesuviano in order to process the infrared data and to extract all the information they contain. In a first phase, the infrared data are processed by an automated system (ASIRA Acq - Automated System of IR Analysis and Acquisition) developed in the Matlab environment with a user-friendly graphical user interface (GUI). ASIRA Acq daily generates time series of residual temperature values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time series are displayed in the Surveillance Room of the Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes, b) co-registration of IR images with respect to a reference frame, c) seasonal correction using a background-removal methodology, d) filing of the IR matrices and of the processed data in shared archives accessible to interrogation. The daily archived records can also be processed by ASIRA Plot (a Matlab code with a GUI) to visualize the IR data time series and to help in evaluating input parameters for further data processing and analysis. Additional processing features are provided in a second phase by ASIRA Tools, a Matlab code with a GUI developed to extract further information from the dataset in an automated way. The main functions of ASIRA Tools are: a) the analysis of the temperature variations of each pixel of the IR frame in a given time interval, b) the removal of seasonal effects from the temperature of every pixel in the IR frames by using an analytic approach (removal of the sinusoidal long-term seasonal component by using a
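
    One plausible reading of the analytic seasonal correction is a least-squares fit of an annual sinusoid that is then subtracted from each pixel's temperature series. A sketch under that assumption (Python):

        import numpy as np

        def remove_seasonal(t_days, temp):
            # Least-squares fit of T(t) = a*sin(wt) + b*cos(wt) + c with an
            # annual period, then return the residual (de-seasoned) series.
            w = 2.0 * np.pi / 365.25
            A = np.column_stack([np.sin(w * t_days),
                                 np.cos(w * t_days),
                                 np.ones_like(t_days)])
            coeffs, *_ = np.linalg.lstsq(A, temp, rcond=None)
            return temp - A @ coeffs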

  9. Processes underlying treatment success and failure in assertive community treatment.

    Science.gov (United States)

    Stull, Laura G; McGrew, John H; Salyers, Michelle P

    2012-02-01

    Processes underlying success and failure in assertive community treatment (ACT), a widely investigated treatment model for persons with severe mental illness, are poorly understood. The purpose of the current study was to examine processes in ACT by (1) understanding how consumers and staff describe the processes underlying treatment success and failure and (2) comparing processes identified by staff and consumers. Investigators conducted semi-structured interviews with 25 staff and 23 consumers from four ACT teams. Both staff and consumers identified aspects of the ACT team itself as the most critical in the process of consumer success. For failure, consumers identified consumer characteristics as most critical and staff identified lack of social relationships. Processes underlying failure were not viewed as merely the opposite of processes underlying success. In addition, there was notable disagreement between staff and consumers on important processes. Findings overlap with critical ingredients identified in previous studies, including aspects of the ACT team, social involvement and employment. In contrast to prior studies, there was little emphasis on hospitalizations and greater emphasis on not abusing substances, obtaining wants and desires, and consumer characteristics.

  10. Dispute settlement process under GATT/WTO diplomatic or judicial ...

    African Journals Online (AJOL)

    This paper probes the mechanisms of the dispute resolution process under the World Trade Organisation (WTO) and the General Agreement on Tariff and Trade (GATT). It tries to analyse the evolution of the dispute process which was initially based on diplomatic procedures and gives an account of its evolution and ...

  11. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    Science.gov (United States)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make the defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by moving each component of the detachable phase mask asymmetrically. An improved Fisher-information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  12. Verification of aero-elastic offshore wind turbine design codes under IEA Wind Task XXIII

    DEFF Research Database (Denmark)

    Vorpahl, Fabian; Strobel, Michael; Jonkman, Jason M.

    2014-01-01

    …with the incident waves, sea current, hydrodynamics and foundation dynamics of the support structure. A large set of time-series simulation results, such as turbine operational characteristics, external conditions, and load and displacement outputs, was compared and interpreted. Load cases were defined and run… to differences in the model fidelity, aerodynamic implementation, hydrodynamic load discretization and numerical difficulties within the codes. The comparisons resulted in a more thorough understanding of the modeling techniques and better knowledge of when various approximations are not valid. More importantly, … is to summarize the lessons learned and present results that code developers can compare to. The set of benchmark load cases defined and simulated during the course of this project, the raw data for this paper, is available to the offshore wind turbine simulation community and is already being used for testing…

  13. Modelling of WWER-440 fuel rod behaviour under operational conditions with the PIN-micro code

    Energy Technology Data Exchange (ETDEWEB)

    Stefanova, S; Vitkova, M; Simeonova, V; Passage, G; Manolova, M [Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria); Haralampieva, Z [National Electric Company Ltd., Kozloduy (Bulgaria); Scheglov, A; Proselkov, V [Institute of Nuclear Reactors, RSC Kurchatov Inst., Moscow (Russian Federation)

    1997-08-01

    The report summarizes the first practical experience obtained from fuel rod performance modelling at the Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences. The results of the application of the PIN-micro code and its modification PINB1 to the thermomechanical analysis of WWER-440 fuel assemblies (FAs) are presented. The aim of this analysis is to study the fuel rod behaviour of the operating WWER reactors. The performance of two FAs with maximal linear power and varying geometrical and technological parameters is analyzed. On the basis of recent publications on WWER fuel performance modelling at extended burnup, the modified PINB1 version of the standard PIN-micro code is briefly described and applied to the selected FAs. A comparison of the calculated results is performed. The PINB1 version predicts higher fuel temperatures and a more adequate fission gas release rate, accounting for the extended burnup. The results presented in this paper demonstrate the existence of sufficient safety margins for the fuel performance limiting parameters during the whole considered period of core operation. (author). 8 refs, 16 figs, 1 tab.

  14. Durability of switchable QR code carriers under hydrolytic and photolytic conditions

    International Nuclear Information System (INIS)

    Ecker, Melanie; Pretsch, Thorsten

    2013-01-01

    Following a guest diffusion approach, the surface of a shape memory poly(ester urethane) (PEU) was either black or blue colored. Bowtie-shaped quick response (QR) code carriers were then obtained from laser engraving and cutting, before thermo-mechanical functionalization (programming) was applied to stabilize the PEU in a thermo-responsive (switchable) state. The stability of the dye within the polymer surface and long-term functionality of the polymer were investigated against UVA and hydrolytic ageing. Spectrophotometric investigations verified UVA ageing-related color shifts from black to yellow-brownish and blue to petrol-greenish whereas hydrolytically aged samples changed from black to greenish and blue to light blue. In the case of UVA ageing, color changes were accompanied by dye decolorization, whereas hydrolytic ageing led to contrast declines due to dye diffusion. The Michelson contrast could be identified as an effective tool to follow ageing-related contrast changes between surface-dyed and laser-ablated (undyed) polymer regions. As soon as the Michelson contrast fell below a crucial value of 0.1 due to ageing, the QR code was no longer decipherable with a scanning device. Remarkably, the PEU information carrier base material could even then be adequately fixed and recovered. Hence, the surface contrast turned out to be the decisive parameter for QR code carrier applicability. (paper)
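
    The contrast criterion is simple to reproduce. Assuming the mean intensities of the dyed and ablated regions are available, the measure reads (Python):

        def michelson_contrast(i_dyed, i_ablated):
            # C = (I_max - I_min) / (I_max + I_min); decoding reportedly fails
            # once C falls below about 0.1.
            i_max, i_min = max(i_dyed, i_ablated), min(i_dyed, i_ablated)
            return (i_max - i_min) / (i_max + i_min)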

  15. RELEVANT ISSUES CONCERNING THE RELOCATION OF CIVIL PROCEEDINGS UNDER THE NEW CODE OF CIVIL PROCEDURE (NCPC

    Directory of Open Access Journals (Sweden)

    Andrei Costin GRIMBERG

    2015-07-01

    The adoption of the new Code of Civil Procedure, and the entry into force of its new provisions on 15 February 2013, was conceived with the hope of accelerating judicial proceedings and noticeably simplifying procedures, all designed with the aim of unifying the case law and lowering the costs generated by lawsuits, costs borne both by the State and by the citizens involved in court cases. Thus, the implementation of the New Code of Civil Procedure sought to ensure the right to a fair trial within an optimal and predictable time before the courts, by conducting trials speedily and avoiding unjustified delays, which were often excessive, both in pending cases and in newly introduced petitions. Among the notable changes that occurred following the entry into force of the new Code of Civil Procedure are the amended provisions regarding requests for relocation, in terms of the grounds on which such a petition may be formulated and the court competent to hear such an application.

  17. Modeling chemical gradients in sediments under losing and gaining flow conditions: The GRADIENT code

    Science.gov (United States)

    Boano, Fulvio; De Falco, Natalie; Arnon, Shai

    2018-02-01

    Interfaces between sediments and water bodies often represent biochemical hotspots for nutrient reactions and are characterized by steep concentration gradients of different reactive solutes. Vertical profiles of these concentrations are routinely collected to obtain information on nutrient dynamics, and simple codes have been developed to analyze these profiles and determine the magnitude and distribution of reaction rates within sediments. However, existing publicly available codes do not consider the potential contribution of water flow in the sediments to nutrient transport, and their applications to field sites with significant water-borne nutrient fluxes may lead to large errors in the estimated reaction rates. To fill this gap, the present work presents GRADIENT, a novel algorithm to evaluate distributions of reaction rates from observed concentration profiles. GRADIENT is a Matlab code that extends a previously published framework to include the role of nutrient advection, and provides robust estimates of reaction rates in sediments with significant water flow. This work discusses the theoretical basis of the method and shows its performance by comparing the results to a series of synthetic data and to laboratory experiments. The results clearly show that in systems with losing or gaining fluxes, the inclusion of such fluxes is critical for estimating local and overall reaction rates in sediments.
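
    The estimate such a code makes can be sketched directly: at steady state, the net volumetric reaction rate is the residual of the advection-diffusion balance, R(z) = q dC/dz - D d2C/dz2, evaluated from the measured profile. A minimal version on a uniform grid (Python; central differences, not the published code):

        import numpy as np

        def reaction_rate_profile(z, C, D, q):
            # R(z) = q*dC/dz - D*d2C/dz2 from a measured steady-state profile;
            # q is the vertical (Darcy) flux that distinguishes losing/gaining
            # conditions, D the effective diffusion/dispersion coefficient.
            dz = z[1] - z[0]
            dC = np.gradient(C, dz)
            d2C = np.gradient(dC, dz)
            return q * dC - D * d2C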

  19. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. For Computational ElectroMagnetics (CEM) software developers, however, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the

  20. Integrated Numerical Experiments (INEX) and the Free-Electron Laser Physical Process Code (FELPPC)

    International Nuclear Information System (INIS)

    Thode, L.E.; Chan, K.C.D.; Schmitt, M.J.; McKee, J.; Ostic, J.; Elliott, C.J.; McVey, B.D.

    1990-01-01

    The strong coupling of subsystem elements, such as the accelerator, wiggler, and optics, greatly complicates the understanding and design of a free electron laser (FEL), even at the conceptual level. To address the strong coupling character of the FEL, the concept of an Integrated Numerical Experiment (INEX) was proposed. Unique features of the INEX approach are consistency and the numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigate the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. The INEX approach has been applied to a large number of accelerator and FEL experiments. Overall, the agreement between INEX and the experiments is very good. Despite the success of INEX, the approach is difficult to apply to trade-off and initial design studies because of the significant manpower and computational requirements. On the other hand, INEX provides a base from which realistic accelerator, wiggler, and optics models can be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from INEX, provides coupling between the subsystem models, and incorporates application models relevant to a specific trade-off or design study. In other words, FELPPC solves the complete physical process model using realistic physics and technology constraints. Because FELPPC provides a detailed design, a good estimate of the FEL mass, cost, and size can be made from a piece-part count of the FEL. FELPPC requires significant accelerator and FEL expertise to operate. The code can calculate complex FEL configurations, including multiple accelerator and wiggler combinations.

  1. From chemical metabolism to life: the origin of the genetic coding process

    Directory of Open Access Journals (Sweden)

    Antoine Danchin

    2017-06-01

    Full Text Available Looking for origins is so rooted in ideology that most studies reflect opinions that fail to explore the first realistic scenarios. To be sure, trying to understand the origins of life should be based on what we know of current chemistry in the solar system and beyond. There, amino acids and very small compounds such as carbon dioxide, dihydrogen or dinitrogen and their immediate derivatives are ubiquitous. Surface-based chemical metabolism using these basic chemicals is the most likely beginning, in which amino acids, coenzymes and phosphate-based small carbon molecules were built up. Nucleotides, and of course RNAs, must have come into being much later. As a consequence, the key question in accounting for life is to understand how a chemical metabolism that began with amino acids progressively shaped into a coding process involving RNAs. Here I explore the role of building up complementarity rules as the first information-based process that allowed the genetic code to emerge, after RNAs were substituted for surfaces to carry over the basic metabolic pathways that drive the pursuit of life.

  2. Development of hydraulic analysis code for optimizing thermo-chemical IS process reactors

    International Nuclear Information System (INIS)

    Terada, Atsuhiko; Hino, Ryutaro; Hirayama, Toshio; Nakajima, Norihiro; Sugiyama, Hitoshi

    2007-01-01

    The Japan Atomic Energy Agency has been conducting studies on the thermochemical IS process for water-splitting hydrogen production. Based on the test results and know-how obtained through bench-scale tests, a pilot test plant with a hydrogen production capacity of 30 Nm3/h is being designed conceptually as the next step of the IS process development. In the design of the IS pilot plant, it is important to make the chemical reactors compact with high performance, from the viewpoint of plant cost reduction. A new hydraulic analysis code has been developed for optimizing the mixing performance of multi-phase flows involving chemical reactions, especially in the Bunsen reactor, which is characterized by a complex flow pattern with gas-liquid chemical interaction involving flow instability. Preliminary analytical results obtained with the above-mentioned code, especially the flow patterns induced by swirling flow, agreed well with water experiments, which showed a vortex breakdown pattern in a simplified Bunsen reactor. (author)

  3. TRIO-EF: a general thermal hydraulics computer code applied to the AVLIS process

    International Nuclear Information System (INIS)

    Magnaud, J.P.; Claveau, M.; Coulon, N.; Yala, P.; Guilbaud, D.; Mejane, A.

    1993-01-01

    TRIO-EF is a general-purpose 3D finite element fluid mechanics code. Its capabilities cover areas such as steady-state or transient, laminar or turbulent, isothermal or temperature-dependent fluid flows; it is applicable to the study of coupled thermo-fluid problems involving heat conduction and possibly radiative heat transfer. It has been used to study the thermal behaviour of the AVLIS process separation module. In this process, a linear electron beam impinges on the free surface of a uranium ingot, generating a two-dimensional curtain emission of vapour from a water-cooled crucible. The energy transferred to the metal causes its partial melting, forming a pool where strong convective motion increases heat transfer towards the crucible. In the upper part of the separation module, the internal structures are devoted to two main functions: vapour containment and reflux, and irradiation and physical separation. They are subjected to very high temperature levels, and heat transfer occurs mainly by radiation. Moreover, special attention has to be paid to electron backscattering. These two major points have been simulated numerically with TRIO-EF, and the paper presents and comments on the results of such computations for each of them. After a brief overview of the computer code, two examples of the TRIO-EF capabilities are given: a crucible thermal hydraulics model and a thermal analysis of the internal structures.

  4. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors

    Science.gov (United States)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of the FFTs in the PCID code led to a decrease in their relative share of processing time. Profiling PCID version 6.2, about one year ago, showed that the 13 functions accounting for the highest percentage of processing were all FFT functions; they accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT functions, and timing measurements showed that FFT processing in PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames per core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames, each producing one resolved image, are sent to cliques of several
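
    The frame-level FFT parallelism discussed above can be sketched in a few lines; the pool size, frame dimensions, and function names are assumptions for illustration, not PCID internals:

```python
# Distribute per-frame 2D FFTs across worker processes, one frame per task.
import numpy as np
from multiprocessing import Pool

def frame_spectrum(frame):
    return np.fft.fft2(frame)        # the hot spot being parallelized

if __name__ == "__main__":
    frames = [np.random.rand(512, 512) for _ in range(64)]
    with Pool(processes=8) as pool:  # e.g. one worker per core
        spectra = pool.map(frame_spectrum, frames)
```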

  5. Samovar: a thermomechanical code for modeling of geodynamic processes in the lithosphere-application to basin evolution

    DEFF Research Database (Denmark)

    Elesin, Y; Gerya, T; Artemieva, Irina

    2010-01-01

    We present a new 2D finite difference code, Samovar, for high-resolution numerical modeling of complex geodynamic processes. Examples are collision of lithospheric plates (including mountain building and subduction) and lithosphere extension (including formation of sedimentary basins, regions of extended crust, and rift zones). The code models deformation of the lithosphere with viscoelastoplastic rheology, including erosion/sedimentation processes and formation of shear zones in areas of high stresses. It also models steady-state and transient conductive and advective thermal processes, including partial melting and magma transport in the lithosphere. The thermal and mechanical parts of the code are tested against a series of physical problems with analytical solutions. We apply the code to geodynamic modeling by examining numerically the processes of lithosphere extension and basin formation...
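
    For flavor, the simplest ingredient of the thermal part of such a code is an explicit conductive step; the grid, diffusivity, and periodic boundaries (via np.roll) below are illustrative assumptions, not Samovar's discretization:

```python
# One explicit step of dT/dt = kappa * (d2T/dx2 + d2T/dy2) on a uniform grid.
import numpy as np

def heat_step(T, kappa, dx, dt):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    return T + dt * kappa * lap      # stable for dt < dx**2 / (4 * kappa)
```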

  6. 28 CFR 522.12 - Relationship between existing criminal sentences imposed under the U.S. or D.C. Code and new...

    Science.gov (United States)

    2010-07-01

    Section 522.12, under Admission to Institution, Civil Contempt of Court Commitments, addresses the relationship between existing criminal sentences imposed under the U.S. or D.C. Code and new civil contempt commitment orders.

  7. Code of Practice on Radiation Protection in the Mining and Processing of Mineral Sands (1982) (Western Australia)

    International Nuclear Information System (INIS)

    1982-01-01

    This Code establishes radiation safety practices for the mineral sands industry in Western Australia. The Code prescribes, not only for operators and managers of mines and processing plants but for their employees as well, certain duties designed to ensure that radiation exposure is kept as low as reasonably practicable. The Code further provides for the management of wastes, again with a view to keeping contaminant concentrations and dose rates within specified levels. Finally, provision is made for the rehabilitation of those sites in which mining or processing operations have ceased by restoring the areas to designated average radiation levels. (NEA) [fr

  8. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

    In this paper, a characteristic of the parallel algorithm is presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, which shows the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated, and the effect of the preconditioner for the matrix solver is also investigated. Finally, a hybrid (OpenMP+MPI) parallel algorithm for the pressure solver is introduced and discussed in detail. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate the information at the neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which turns out to be asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effect of the matrix size and preconditioning was also investigated. The fine-mesh calculation shows better scalability than the coarse mesh, because the coarse mesh does not allow the computational domain to be decomposed without excessive communication overhead; a fine mesh can show good scalability when the geometry is divided with the ratio between computation and communication time in mind. For a given mesh, single-phase flow
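
    The CSR format mentioned above stores a sparse matrix as three flat arrays; a minimal, unoptimized matrix-vector product over that layout (illustrative, not CUPID source) looks like this:

```python
# y = A @ x with A held in CSR form (values, column indices, row pointers).
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):                # one sparse dot product per row
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[lo:hi], x[col_idx[lo:hi]])
    return y

# 2x2 example, A = [[4, 1], [0, 3]], x = [1, 2] -> y = [6, 6]
print(csr_matvec(np.array([4.0, 1.0, 3.0]), np.array([0, 1, 1]),
                 np.array([0, 2, 3]), np.array([1.0, 2.0])))
```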

  9. Narrative and emotion process in psychotherapy: an empirical test of the Narrative-Emotion Process Coding System (NEPCS).

    Science.gov (United States)

    Boritz, Tali Z; Bryntwick, Emily; Angus, Lynne; Greenberg, Leslie S; Constantino, Michael J

    2014-01-01

    While the individual contributions of narrative and emotion processes to psychotherapy outcome have been the focus of recent interest in the psychotherapy research literature, the empirical analysis of narrative and emotion integration has rarely been addressed. The Narrative-Emotion Process Coding System (NEPCS) was developed to provide researchers with a systematic method for identifying specific narrative and emotion process markers, for application to therapy session videos. The present study examined the relationship between NEPCS-derived problem markers (same old storytelling, empty storytelling, unstoried emotion, abstract storytelling) and change markers (competing plotlines storytelling, inchoate storytelling, unexpected outcome storytelling, and discovery storytelling), and treatment outcome (recovered versus unchanged at therapy termination) and stage of therapy (early, middle, late) in brief emotion-focused (EFT), client-centred (CCT), and cognitive (CT) therapies for depression. Hierarchical linear modelling analyses demonstrated a significant Outcome effect for inchoate storytelling (p = .037) and discovery storytelling (p = .002), a Stage × Outcome effect for abstract storytelling (p = .05), and a Stage × Outcome × Treatment effect for competing plotlines storytelling (p = .001). There was also a significant Stage × Outcome effect for NEPCS problem markers (p = .007) and change markers (p = .03). The results provide preliminary support for the importance of assessing the contribution of narrative-emotion processes to efficacious treatment outcomes in EFT, CCT, and CT treatments of depression.

  10. [Scientific connotation of processing Bombyx Batryticatus under high temperature].

    Science.gov (United States)

    Ma, Li; Wang, Xuan; Ma, Lin; Wang, Man-yuan; Qiu, Feng

    2015-12-01

    The aim of this study was to elucidate the scientific connotation of processing Bombyx Batryticatus with wheat bran under high temperature. The contents of soluble protein extracted from Bombyx Batryticatus and its processed products, as well as the limited content of aflatoxins (AFT) in the crude and processed drug, were compared. Protein concentration was measured with the Bradford method, and the differences in protein between Bombyx Batryticatus and its processed products were compared by SDS-PAGE analysis. Aflatoxins B1, B2, G1 and G2 were determined by reversed-phase HPLC. The results showed that the soluble protein contents of Bombyx Batryticatus and its processed products were (47.065 +/- 0.249) and (29.756 +/- 1.961) mg x g(-1), respectively. SDS-PAGE analysis showed no significant differences between the crude and processed drug in protein varieties; 6 bands were detected: 31.90, 26.80, 18.71, 15.00, 10.18 and 8.929 kDa. Below 10 kDa, the bands of the processed product were deeper in color than those of the crude drug, demonstrating that macromolecular protein was degraded into smaller molecules. The contents of AFG1, AFB1, AFG2 and AFB2 in the crude drug were 0.382, 0.207, 0.223 and 0.073 microg x kg(-1), respectively, not exceeding 5 microg x kg(-1), while none was detected in the processed product. Through processing with wheat bran under high temperature, the content of soluble protein in Bombyx Batryticatus decreased, so the processing purpose of alleviating the drug property was achieved. Meanwhile, the limited content of aflatoxins was reduced or removed by the processing procedure, or absorbed by the processing auxiliary material, adding to the safety of the traditional Chinese medicine. In conclusion, as a traditional processing method, bran-frying of Bombyx Batryticatus is scientific and reasonable.

  11. Optimization of Wireless Transceivers under Processing Energy Constraints

    Science.gov (United States)

    Wang, Gaojian; Ascheid, Gerd; Wang, Yanlu; Hanay, Oner; Negra, Renato; Herrmann, Matthias; Wehn, Norbert

    2017-09-01

    The focus of this article is on achieving maximum data rates under a processing energy constraint. For a given amount of processing energy per information bit, the overall power consumption increases with the data rate. When targeting data rates beyond 100 Gb/s, the system's overall power consumption soon exceeds the power that can be dissipated without forced cooling. To achieve a maximum data rate under this power constraint, the processing energy per information bit must be minimized. Therefore, in this article, suitable processing-efficient transmission schemes together with energy-efficient architectures and their implementations are investigated in a true cross-layer approach. Target use cases are short-range wireless transmitters working at carrier frequencies around 60 GHz and bandwidths between 1 GHz and 10 GHz.
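
    The constraint reduces to one line of arithmetic; the power budget and per-bit energy below are assumed values chosen only to show the scale involved:

```python
# Achievable data rate under a dissipation cap: R = P_max / E_bit.
P_max = 1.0     # W that can be dissipated without forced cooling (assumed)
E_bit = 10e-12  # J of processing energy per information bit (assumed)
R = P_max / E_bit
print(f"max rate ~ {R / 1e9:.0f} Gb/s")   # -> 100 Gb/s
```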

  12. Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction]

    Energy Technology Data Exchange (ETDEWEB)

    Taherzadeh, M.

    1987-11-13

    The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the plasma separation program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the maximum efficiency of the betacell energy device, the degradation due to the emitting source radiation, and the source/cell lifetime power reduction processes. Additionally, a comparison is made between Schottky and PN junction devices for betacell battery design purposes. Computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source, and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given for comparison. 16 refs.

  13. Models of neural networks: temporal aspects of coding and information processing in biological systems

    CERN Document Server

    Hemmen, J; Schulten, Klaus

    1994-01-01

    Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons, and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregation...

  14. Enhancement of Pre-and Post-Processing Capability of the CUPID code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jongtae; Park, Ik Kyu; Yoon, Hanyoung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    To simulate heat transfer and fluid flow in a field with a complicated geometry, unstructured meshes are popularly used, and most commercial CFD (Computational Fluid Dynamics) solvers are based on unstructured mesh technology. An advantage of using unstructured meshes for a field simulation is the reduction in man-hours afforded by automatic mesh generation, compared to traditional structured mesh generation, which requires a huge amount of man-hours to discretize a complex geometry. Initially, the unstructured meshes that could be generated automatically were limited to regular cell elements such as tetrahedra, pyramids, prisms, or hexahedra. The multi-dimensional multi-phase flow solver CUPID has been developed in the context of an unstructured-mesh finite volume method (FVM). Its numerical formulation and programming structure are independent of the number of faces surrounding the computational cells; thus, it can be easily extended to polyhedral unstructured meshes. In this study, new tools for enhancing the pre- and post-processing capabilities of CUPID are proposed. They are based on the open-source CFD toolbox OpenFOAM. The goal of this study is to extend the applicability of the CUPID code by improving the mesh and solution treatment of the code.

  15. Nodalization qualification process of the PSBVVER facility for the Cathare2 thermal-hydraulic code

    International Nuclear Information System (INIS)

    Del Nevo, A.; Araneo, D.; D'Auria, F.; Galassi, G.

    2004-01-01

    The present document deals with the nodalization qualification process of the PSB-VVER test facility for the Cathare2 code. The PSB-VVER facility is a 1/300 volume-scale model of a VVER-1000 reactor, installed at the Electrogorsk Research and Engineering Centre in 1998. Version V1.5b of the Cathare2 code has been used. In order to evaluate the nodalization performance, the qualification procedure set up at the DIMNP of Pisa University (UNIPI) has been applied, which foresees two qualification levels: a 'steady state' level and an 'on transient' level. After checking the steady-state behavior of the nodalization, the 'on transient' qualification was performed with PSB-VVER test 2. This test is an 11% equivalent break in the upper plenum with the actuation of one high-pressure injection system, connected to the hot leg of loop 4, and of 4 passive systems (ECCS hydro-accumulators), connected to the outlet plenum and to the inlet chamber of the downcomer. The low-pressure injection system is not available in the test. The goal of this paper is to demonstrate that the first step of the nodalization qualification adopted for the PSB test analyses has been achieved and that the PSB facility input deck is available and ready to use. The quantitative accuracy of the performed calculation has been evaluated by using the FFT-BM tool developed at the University of Pisa. (author)

  16. Development of thermal-hydraulic safety codes for HTGRs with gas-turbine and hydrogen process cycles

    International Nuclear Information System (INIS)

    No, Hee Cheon; Yoon, Ho Joon; Lee, Byung Jin; Kim, Yong Soo; Jin, Hyeng Gon; Kim, Ji Hwan; Kim, Hyeun Min; Lim, Hong Sik

    2008-01-01

    We present three nuclear/hydrogen-related R and D activities being performed at KAIST: air-ingress LOCA analysis code development, gas turbine analysis tool development, and hydrogen-production system analysis model development. The ICE numerical technique, widely used for the safety analysis of water reactors, has been successfully implemented in GAMMA, in which we solve the basic equations for continuity, momentum conservation and energy conservation of the gas mixture, and mass conservation of 6 species (He, N2, O2, CO, CO2, and H2O). GAMMA has been extensively validated using data from 14 test facilities. We developed the SANA code to predict the characteristics of HTGR helium turbines based on a throughflow calculation with a Newton-Raphson method that overcomes the weakness of the conventional method based on a successive iteration scheme. It is found that the current method reaches stable and quick convergence even under off-normal conditions, with the same degree of accuracy. The GAMMA-SANA coupled code was assessed by comparing its results with the steady state of the GTHTR300, and a load reduction transient was simulated for the 100% to 70% power operation. The calculation results confirm that two-dimensional throughflow modeling can be successfully used to describe the gas turbine behavior. The dynamic equations for the distillation column of the HI process in the I-S cycle are described with the 4 material components involved in the HI process: H2O, HI, I2, and H2. For the VLE prediction in the HI process, we improved the Neumann model based on the NRTL (Non-Random Two-Liquid) model. Relative to the experimental data, the improved Neumann model shows deviations of 8.6% at maximum and 2.5% on average for the total pressure, and 9.5% at maximum for the liquid-liquid separation composition. Through a parametric analysis using the published experimental data related to the Bunsen reaction and liquid-liquid separation, an optimized operating condition for the

  17. Probabilistic Design in a Sheet Metal Stamping Process under Failure Analysis

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao, Jian; Chen, Wei; Xia, Z. Cedric

    2005-01-01

    Sheet metal stamping processes have been widely implemented in many industries due to their repeatability and productivity. In general, the simulation of a sheet metal forming process involves nonlinearity, complex material behavior and tool-material interaction. Instabilities, in the form of tearing and wrinkling, are major concerns in many sheet metal stamping processes. In this work, a sheet metal stamping process for a mild steel wheelhouse used in the automobile industry is studied by using an explicit nonlinear finite element code and incorporating failure analysis (tearing and wrinkling) and design under uncertainty. Margins against tearing and wrinkling are quantitatively defined via stress-based criteria for system-level design. The forming process utilizes drawbeads, instead of blank holder force, to restrain the blank. The main parameters of interest in this work are the friction conditions, drawbead configurations, sheet metal properties, and numerical errors. A robust design model is created to conduct a probabilistic design, which is made possible for this complex engineering process via an efficient uncertainty propagation technique. The method, called the weighted three-point-based method, estimates the statistical characteristics (mean and variance) of the responses of interest (margins against failure) and provides a systematic approach to designing a sheet metal forming process under the framework of design under uncertainty.
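
    The three-point idea can be illustrated generically (the paper's exact weights and multi-variable bookkeeping may differ): the response is evaluated at the input mean and at symmetric offsets, and weighted moments follow. For a Gaussian input, nodes at mu +/- sqrt(3)*sigma with weights 1/6, 2/3, 1/6 are the classical three-point Gauss-Hermite choice:

```python
# Three-point estimate of the mean and variance of f(X), X ~ N(mu, sigma^2).
import numpy as np

def three_point_stats(f, mu, sigma):
    k = np.sqrt(3.0)
    nodes = np.array([mu - k * sigma, mu, mu + k * sigma])
    weights = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
    vals = np.array([f(x) for x in nodes])
    mean = weights @ vals
    var = weights @ (vals - mean) ** 2
    return mean, var

# Hypothetical response: a tearing margin as a function of friction coefficient.
mean, var = three_point_stats(lambda mu_f: 1.0 - 2.5 * mu_f**2, mu=0.12, sigma=0.02)
```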

  18. The levels of processing effect under nitrogen narcosis.

    Science.gov (United States)

    Kneller, Wendy; Hobbs, Malcolm

    2013-01-01

    Previous research has consistently demonstrated that inert gas (nitrogen) narcosis affects free recall but not recognition memory in the depth range of 30 to 50 meters of sea water (msw), possibly as a result of narcosis preventing processing when learned material is encoded. The aim of the current research was to test this hypothesis by applying a levels of processing approach to the measurement of free recall under narcosis. Experiment 1 investigated the effect of depth (0-2 msw vs. 37-39 msw) and level of processing (shallow vs. deep) on free recall memory performance in 67 divers. When age was included as a covariate, recall was significantly worse in deep water (i.e., under narcosis) compared to shallow water, and was significantly higher in the deep processing conditions compared to the shallow processing conditions at both depths. Experiment 2 demonstrated that this effect was not simply due to the different underwater environments used for the depth conditions in Experiment 1. It was concluded that memory performance can be altered by processing under narcosis, which supports the contention that narcosis affects the encoding stage of memory as opposed to self-guided search (retrieval).

  19. Finite element simulation of ironing process under warm conditions

    Directory of Open Access Journals (Sweden)

    Swadesh Kumar Singh

    2014-01-01

    Full Text Available Metal forming is one of the most important steps in the manufacturing of a large variety of products. Ironing in deep drawing is done by adjusting the clearance between the punch and the die so that the material flows over the punch. In the present investigation, the effect of the extent of ironing on product characteristics, such as the thickness distribution with respect to temperature, was studied. With the help of finite element simulation using the explicit finite element code LS-DYNA, the stresses in the drawn cup were predicted. To increase the accuracy of the simulation, the number of integration points in the thickness direction was increased, and the finite element results were found to be in very close agreement with the experimental ones.

  20. DIONISIO 2.0: new version of the code for simulating the behavior of a power fuel rod under irradiation

    International Nuclear Information System (INIS)

    Soba, A; Denis, A; Lemes, M; Gonzalez, M E

    2012-01-01

    During the last ten years, the Codes and Models Section of the Nuclear Fuel Cycle Department has been developing the DIONISIO code, which simulates most of the main phenomena that take place within a fuel rod during the normal operation of a nuclear reactor: temperature distribution, thermal expansion, elastic and plastic strain, creep, irradiation growth, pellet-cladding mechanical interaction, fission gas release, swelling and densification. Axial symmetry is assumed and cylindrical finite elements are used to discretize the domain. The code has a modular structure and contains more than 40 interconnected models. A group of subroutines designed to extend the application range of the code to high burnup has recently been included. These new calculation tools, which are tuned for UO2 fuels under LWR conditions, predict the radial distribution of power density, burnup and the concentration of diverse nuclides within the pellet. New models of porosity and fission gas release in the rim, as well as of the influence of the microstructure of this zone on the thermal conductivity of the pellet, are presently under development. A considerable computational challenge was the inclusion of the option of simulating the whole rod by dividing it into a number of axial segments, at the user's choice, and solving the complete problem in each segment. All the general rod parameters (pressure, fission gas release, volume, etc.) are evaluated at the end of every time step. This modification allows taking into account the axial variation of the linear power and, consequently, evaluating the dependence of all the significant rod parameters on that coordinate. DIONISIO was selected to participate in the FUMEX III code intercomparison project, organized by the IAEA from 2008 to 2011. The results of the simulations performed within this project were compared with more than 30 experiments involving more than 150 irradiated rods. The high number

  1. The effects of divided attention on encoding processes under incidental and intentional learning instructions: underlying mechanisms?

    Science.gov (United States)

    Naveh-Benjamin, Moshe; Guez, Jonathan; Hara, Yoko; Brubaker, Matthew S; Lowenschuss-Erlich, Iris

    2014-01-01

    Divided attention (DA) at encoding has been shown to significantly disrupt later memory for the studied information. However, what type of processing gets disrupted during DA remains unresolved. In this study, we assessed the degree to which strategic effortful processes are affected under DA by comparing the effects of DA at encoding under intentional and purely incidental learning instructions. In three experiments, participants studied lists of words or word pairs under either full or divided attention. The results of the three experiments, which used different methodologies, converged to show that DA at encoding reduces memory performance to the same degree under incidental and intentional learning. Secondary task performance indicated that encoding under intentional learning instructions was more effortful than under incidental learning instructions. In addition, the results indicated enhanced attention to the initial appearance of the words under both types of learning instructions. The results are interpreted to imply that processes other than strategic effortful ones might be affected by DA at encoding.

  2. Eye Movement Analysis of Information Processing under Different Testing Conditions.

    Science.gov (United States)

    Dillon, Ronna F.

    1985-01-01

    Undergraduates were given complex figural analogy items, and their eye movements were observed under three types of feedback: (1) elaborate feedback; (2) subjects verbalizing their thinking and application of rules; and (3) no feedback. Both feedback conditions enhanced rule-governed information processing during inductive reasoning. (Author/GDC)

  3. Dynamic Processes in Nanostructured Crystals Under Ion Irradiation

    Science.gov (United States)

    Uglov, V. V.; Kvasov, N. T.; Shimanski, V. I.; Safronov, I. V.; Komarov, N. D.

    2018-02-01

    The paper presents detailed investigations of the dynamic processes occurring in nanostructured Si(Fe) material under radiation exposure, namely heating, thermoelastic stress generation, elastic disturbances of the surrounding medium similar to weak shock waves, and dislocation generation. Calculations of the elastic properties of the nanostructured material are proposed that take into account size effects in the nanoparticles.

  4. Coded excitation and sub-band processing for blood velocity estmation in medical ultrasound

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Jensen, Jørgen Arendt

    2007-01-01

    This paper investigates the use of broadband coded excitation and sub-band processing for blood velocity estimation in medical ultrasound. In conventional blood velocity estimation, a long (narrow-band) pulse is emitted and the blood velocity is estimated using an auto-correlation based approach. However, the axial resolution of the narrow-band pulse is too poor for brightness-mode (B-mode) imaging. Therefore, a separate transmission sequence is used for updating the B-mode image, which lowers the overall frame-rate of the system. By using broad-band excitation signals, the backscattered received signal can be divided into a number of narrow frequency bands. The blood velocity can be estimated in each of the bands, and the velocity estimates can be averaged to form an improved estimate. Furthermore, since the excitation signal is broadband, no secondary B-mode sequence is required, and the frame...
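
    A sketch of the per-band estimator (a standard lag-one autocorrelation form; the exact processing in the paper, and all parameter values here, are assumptions):

```python
# Velocity from slow-time IQ data in one narrow band, then averaged over bands.
import numpy as np

def band_velocity(iq, f_c, prf, c=1540.0):
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]))  # lag-one autocorrelation estimate
    return c * prf * np.angle(r1) / (4.0 * np.pi * f_c)

def subband_velocity(bands, prf):
    """bands: list of (slow-time IQ array, band center frequency in Hz)."""
    return np.mean([band_velocity(iq, f_c, prf) for iq, f_c in bands])
```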

  5. Patient's right to information under the New Zealand Code of Rights.

    Science.gov (United States)

    Mullen, Kyla

    2015-09-01

    The Code of Health and Disability Services Consumers' Rights includes right 6: the "Right to be Fully Informed". Analysis of the Health and Disability Commissioners' opinions between 2008 and 2013 that have discussed right 6 shows that the duties on providers have increased in a number of areas: the need to inform of risks, including provider-inherent risks; open disclosure of adverse events; ongoing need to inform consumers throughout the therapeutic relationship; information of all available options; and provision of sufficient time between disclosure of information and obtaining informed consent for provision of health services. Following a breach opinion, the Human Rights Review Tribunal and the Health Practitioners Competency Tribunal, on occasion, have the opportunity to consider the case but their role in law development is limited compared with that of the Commissioner. The limitations of law development in this manner are discussed.

  6. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1986-01-01

    We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of the temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both the evaluated data and the codes are identified and eliminated in the 1985 versions of these codes. This paper helps to (1) inform code users to use only the 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate them. (author)
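
    A numerical illustration (all values invented) of why the temperature derivatives are so sensitive: the derivative differences two nearly equal self-shielding factors, so a small absolute interpolation error on each value becomes a large relative error on the derivative:

```python
# Finite-difference temperature derivative of a self-shielding factor f(T).
f300, f600 = 0.8012, 0.7956   # hypothetical factors at 300 K and 600 K
tol = 5e-4                    # assumed interpolation error bound per value
dfdT = (f600 - f300) / 300.0  # ~ -1.9e-5 per K
err = 2.0 * tol / 300.0       # worst-case error on the derivative
print(f"relative error on df/dT up to {err / abs(dfdT):.0%}")   # ~18%
```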

  7. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1985-01-01

    The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of the temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both the evaluated data and the codes are identified and eliminated in the 1985 versions of these codes. This paper helps to (1) inform code users to use only the 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate them

  8. Development and verification of coupled fluid-structural dynamic codes for stress analysis of reactor vessel internals under blowdown loading

    International Nuclear Information System (INIS)

    Krieg, R.; Schlechtendahl, E.G.

    1977-01-01

    YAQUIR has been applied to large PWR blowdown problems and compared with LECK results. The structural model of CYLDY2 and the fluid model of YAQUIR have been coupled in the code STRUYA. First tests with the fluid-dynamic systems code FLUST have been successful. The incompressible-fluid version of the 3D coupled code FLUX for the HDR geometry was checked against some analytical test cases and was used for evaluating the eigenfrequencies of the coupled system. Several test cases were run with the two-phase flow code SOLA-DF, with satisfactory results. Remarkable agreement was found between YAQUIR results and experimental data obtained from shallow-water analogy experiments. A test for the investigation of non-equilibrium two-phase flow dynamics has been specified in some detail; it is to be performed early in 1978 in the water loop of the IRB. Good agreement was found between the natural frequency predictions for the core barrel obtained from CYLDY2 and STRUDL/DYNAL. Work started on improving the beam mode treatment in CYLDY2; the name of this modified version will be CYLDY3. The fluid-dynamic code SING1, based on an advanced singularity method and applicable to a broad class of highly transient, incompressible 3D problems with negligible viscosity, has been developed and tested. It will be used in connection with the planned laboratory experiments in order to investigate the effect of the core structure on the blowdown process. Coupling of SING1 with structural dynamics is on the way. (orig./RW) [de

  9. Classification and coding of commercial fishing injuries by work processes: an experience in the Danish fresh market fishing industry

    DEFF Research Database (Denmark)

    Jensen, Olaf Chresten; Stage, Søren; Noer, Preben

    2005-01-01

    BACKGROUND: Work-related injuries in commercial fishing are of concern internationally. To better identify the causes of injury, this study coded occupational injuries by working processes in commercial fishing for fresh market fish. METHODS: A classification system of the work processes was deve......BACKGROUND: Work-related injuries in commercial fishing are of concern internationally. To better identify the causes of injury, this study coded occupational injuries by working processes in commercial fishing for fresh market fish. METHODS: A classification system of the work processes...... to working with the gear and nets vary greatly in the different fishing methods. Coding of the injuries to the specific working processes allows for targeted prevention efforts....

  10. Development of RETRAN-03/MOV code for thermal-hydraulic analysis of nuclear reactor under moving conditions

    International Nuclear Information System (INIS)

    Kim, Hak Jae; Park, Goon Cherl

    1996-01-01

    Nuclear ship reactors have several features different from land-based PWRs. In particular, the effects of ship motions on reactor thermal-hydraulics and a good load-following capability for abrupt load changes are essential characteristics of nuclear ship reactors. This study modified RETRAN-03 to analyze thermal-hydraulic transients under three-dimensional ship motions, naming the result RETRAN-03/MOV, in order to apply it to future marine reactors. The reactor of the first Japanese nuclear ship, MUTSU, has been analyzed under various ship motions to verify this code. Calculations have been performed under rolling, heaving and stationary inclination conditions during normal operation. The natural circulation, which can provide decay heat removal to ensure the passive safety of marine reactors, has also been analyzed. As a result, typical thermal-hydraulic characteristics of marine reactors, such as flow rate oscillations and S/G water level oscillations, have been successfully simulated under various conditions. 7 refs., 11 figs. (author)
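
    The basic geometric effect of rolling on a thermal-hydraulic model can be sketched as a time-dependent body force; the roll amplitude and period below are assumptions, and rotational acceleration terms are ignored for brevity:

```python
# Effective gravity components in ship coordinates under harmonic rolling.
import numpy as np

G, THETA0, PERIOD = 9.81, np.deg2rad(20.0), 12.0  # assumed roll parameters

def gravity_components(t):
    theta = THETA0 * np.sin(2.0 * np.pi * t / PERIOD)  # roll angle at time t
    return G * np.sin(theta), G * np.cos(theta)        # transverse, vertical
```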

  11. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows recovering the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introducing the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows deciding whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  12. Concentration processes under tubesheet sludge piles in nuclear steam generators

    International Nuclear Information System (INIS)

    Gonzalez, F.; Spekkens, P.

    1987-01-01

    The process by which bulk water solutes are concentrated under tubesheet sludge piles in nuclear steam generators was investigated in the laboratory under simulated CANDU operating conditions. Concentration rates were found to depend on the tube heat flux and pile depth, although beyond a critical depth the concentration efficiency decreased. This efficiency could be expressed by a concentration coefficient, and was found to depend also on the sludge pile porosity. Solute concentration profiles in the sludge pile suggested that the concentration mechanism in a high-porosity/permeability pile is characterized by boiling mainly near or at the tube surface, while in low-porosity piles, the change of phase may also become important in the body of the sludge pile. In all cases, the full depth of the pile was active to some extent in the concentration process. As long as the heat transfer under the pile was continued, the solute remained under the pile and slowly migrated toward the bottom. When the heat transfer was stopped, the solute diffused back into the bulk solution at a rate slower than that of the concentration process

  13. Dress codes and appearance policies: challenges under federal legislation, part 2: title VII of the civil rights act and gender.

    Science.gov (United States)

    Mitchell, Michael S; Koen, Clifford M; Darden, Stephen M

    2014-01-01

    As more and more individuals express themselves with tattoos and body piercings and push the envelope on what is deemed appropriate in the workplace, employers have an increased need for creation and enforcement of reasonable dress codes and appearance policies. As with any employment policy or practice, an appearance policy must be implemented and enforced without regard to an individual's race, color, gender, national origin, religion, disability, age, or other protected status. A policy governing dress and appearance based on the business needs of an employer that is applied fairly and consistently and does not have a disproportionate effect on any protected class will generally be upheld if challenged in court. By examining some of the more common legal challenges to dress codes and how courts have resolved the disputes, health care managers can avoid many potential problems. This article, the second part of a 3-part examination of dress codes and appearance policies, focuses on the issue of gender under the Civil Rights Act of 1964. Pertinent court cases that provide guidance for employers are addressed.

  14. Analysis of error floor of LDPC codes under LP decoding over the BSC

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi [UNIV OF AZ; Vasic, Bane [UNIV OF AZ; Stepanov, Mikhail [UNIV OF AZ

    2009-01-01

    We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm, termed the instanton search algorithm (ISA), which, given a random input, generates a set of flips called the BSC-instanton, and we prove that: (a) the LP decoder fails for any set of flips whose support vector includes an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. We obtain the number of unique instantons of different sizes by running the ISA a sufficient number of times. We then use the instanton statistics to predict the performance of LP decoding over the BSC in the error floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of transition probabilities of the BSC.
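
    The semi-analytical prediction step can be sketched as a leading-order union bound over the collected instanton statistics; the instanton spectrum below is invented for illustration:

```python
# Error-floor FER estimate on the BSC from counts N_w of instantons of size w:
# FER(p) ~ sum_w N_w * p**w  (leading-order union bound).
def fer_estimate(instanton_counts, p):
    return sum(n * p**w for w, n in instanton_counts.items())

counts = {8: 12, 9: 57, 10: 310}   # hypothetical instanton spectrum
for p in (1e-2, 1e-3, 1e-4):
    print(p, fer_estimate(counts, p))
```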

  15. DIONISIO 2.0: New version of the code for simulating a whole nuclear fuel rod under extended irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Soba, Alejandro, E-mail: soba@cnea.gov.ar; Denis, Alicia

    2015-10-15

    Highlights: • A new version of the DIONISIO code is developed. • DIONISIO is devoted to simulating the behavior of a nuclear fuel rod in operation. • The formerly two-dimensional simulation of a pellet-cladding segment is now extended to the whole rod length. • An acceptable and more realistic agreement with experimental data is obtained. • The prediction range of our code is extended up to an average burnup of 60 MWd/kgU. - Abstract: Version 2.0 of the DIONISIO code, which incorporates diverse new aspects, has recently been developed. One of them concerns the code architecture, which now allows taking into account the axial variation of the conditions external to the rod. With this purpose, the rod is divided into a number of axial segments. In each one, the program considers the system formed by a pellet and the corresponding cladding portion and solves the numerous phenomena that take place under the local conditions of linear power and coolant temperature, which are given as input parameters. To do this, a two-dimensional domain in the r–z plane is considered, where cylindrical symmetry and also symmetry with respect to the pellet mid-plane are assumed. The results obtained for this representative system are assumed valid for the complete segment. The program thus produces in each rod section the values of the temperature, stress and strain, among others, as functions of the local coordinates r and z. Then, the general rod parameters (internal rod pressure, amount of fission gas released, pellet stack elongation, etc.) are evaluated. Moreover, new calculation tools designed to extend the application range of the code to high burnup, which were reported elsewhere, have also recently been incorporated into DIONISIO 2.0. With these improvements, the code results are compared with some 33 experiments compiled in the IFPE database, covering more than 380 fuel rods irradiated up to average burnup levels of 40–60 MWd/kgU. The results of these
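
    In outline, the axial-segmentation scheme described above is a loop of local r-z solutions whose outputs are accumulated into rod-level quantities; every name below is an invented placeholder, with solve_rz standing in for the two-dimensional pellet/cladding solution of one segment:

```python
# Rod-level bookkeeping over axial segments (structural sketch only).
def simulate_rod(segments, solve_rz):
    fgr_total, elongation = 0.0, 0.0
    for seg in segments:   # one pellet/cladding system per axial segment
        out = solve_rz(seg["linear_power"], seg["coolant_temp"])
        fgr_total += out["fission_gas_released"]
        elongation += out["stack_elongation"]
    return {"fission_gas_released": fgr_total, "stack_elongation": elongation}
```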

  17. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  18. Ultrasonic signal processing for sizing under-clad flaws

    International Nuclear Information System (INIS)

    Shankar, R.; Paradiso, T.J.; Lane, S.S.; Quinn, J.R.

    1985-01-01

    Ultrasonic digital data were collected from under-clad cracks in sample pressure vessel specimen blocks. These blocks were weld-clad using different processes to simulate actual conditions in U.S. pressurized water reactors. Each crack was represented by a flaw-echo dynamic curve, which relates the transducer motion on the surface to the ultrasonic response from within the material. Crack depth sizing was performed by identifying in the dynamic curve the crack-tip diffraction signals from the upper and lower tips. This paper describes the experimental procedure, the digital signal processing methods used, and the algorithms developed for crack depth sizing.
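
    As an illustration of the tip-diffraction timing principle behind such sizing (the record gives no formulas, so everything below — wave speed, geometry, function names — is an assumed minimal sketch, not the authors' algorithm):

        # Hypothetical illustration of crack-tip diffraction sizing (not the
        # algorithm from this record): with a pulse-echo probe directly above
        # the crack, the height between upper and lower tips follows from the
        # arrival-time difference of the two tip-diffracted echoes.

        V_LONG = 5900.0  # assumed longitudinal wave speed in steel, m/s

        def crack_height(t_upper, t_lower, v=V_LONG):
            """Crack height (m) from two-way arrival times (s) of the upper-
            and lower-tip diffraction signals on a straight vertical path."""
            return v * (t_lower - t_upper) / 2.0

        # Example: tip echoes arriving 1.0 microsecond apart -> ~3 mm height.
        print(crack_height(10.0e-6, 11.0e-6))  # 0.00295 m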

  19. Learning process mapping heuristics under stochastic sampling overheads

    Science.gov (United States)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to obtain the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. Here, the statistical selection method is extended to take into account the variations in the amount of time used to evaluate heuristics on a problem instance. The resulting improvement in performance is presented under this more realistic assumption, along with some methods that alleviate the additional complexity.

  20. Discrete neurochemical coding of distinguishable motivational processes: insights from nucleus accumbens control of feeding.

    Science.gov (United States)

    Baldo, Brian A; Kelley, Ann E

    2007-04-01

    The idea that nucleus accumbens (Acb) dopamine transmission contributes to the neural mediation of reward, at least in a general sense, has achieved wide acceptance. Nevertheless, debate remains over the precise nature of dopamine's role in reward and even over the nature of reward itself. In the present article, evidence is reviewed from studies of food intake, feeding microstructure, instrumental responding for food reinforcement, and dopamine efflux associated with feeding, which suggests that reward processing in the Acb is best understood as an interaction among distinct processes coded by discrete neurotransmitter systems. In agreement with several theories of Acb dopamine function, it is proposed here that allocation of motor effort in seeking food or food-associated conditioned stimuli can be dissociated from computations relevant to the hedonic evaluation of food during the consummatory act. The former appears to depend upon Acb dopamine transmission and the latter upon striatal opioid peptide release. Moreover, dopamine transmission may play a role in 'stamping in' associations between motor acts and goal attainment and perhaps also neural representations corresponding to rewarding outcomes. Finally, evidence is reviewed that amino acid transmission specifically in the Acb shell acts as a central 'circuit breaker' to flexibly enable or terminate the consummatory act, via descending connections to hypothalamic feeding control systems. The heuristic framework outlined above may help explain why dopamine-compromising manipulations that strongly diminish instrumental goal-seeking behaviors leave consummatory activity relatively unaffected.

  1. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
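
    For readers unfamiliar with divisive normalization, the following toy sketch shows the inverse dependence on recent value history that the paper reports; the functional form and all parameters are illustrative assumptions, not the authors' fitted model:

        # Illustrative divisive normalization of value with temporal context
        # (assumed functional form and parameters): the current item's value
        # is divided by the average of recently observed values, so valuation
        # varies inversely with the recent context.

        def normalized_value(x, history, sigma=1.0, w=1.0):
            context = sum(history) / len(history) if history else 0.0
            return x / (sigma + w * context)

        history = []
        for x in [2.0, 10.0, 10.0, 2.0]:
            print(round(normalized_value(x, history), 3))
            history.append(x)  # update the temporal context
        # The same item (x = 2.0) is valued lower after high-value items.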

  2. Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code

    Science.gov (United States)

    Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.

    2015-08-01

    MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4, with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of the models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.

  3. Analysis of UO{sub 2}-BeO fuel under transient using fuel performance code

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, Daniel S.; Abe, Alfredo Y.; Muniz, Rafael O.R.; Giovedi, Claudia, E-mail: dsgomes@ipen.br, E-mail: alfredo@ctmsp.mar.mil.br, E-mail: rafael.orm@gmail.com, E-mail: claudia.giovedi@ctmsp.mar.mil.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Universidade de São Paulo (USP), São Paulo, SP (Brazil). Departamento de Engenharia Naval e Oceânica

    2017-11-01

    Recent research has pointed to the need to replace the classic fuel concept used in light water reactors. Uranium dioxide has a weak point in its low thermal conductivity, which produces high temperatures in the fuel. The ceramic composite fuel formed of uranium dioxide (UO{sub 2}) with the addition of beryllium oxide (BeO) presents high thermal conductivity compared with UO{sub 2}. The oxidation of zirconium generates hydrogen gas that can create a detonation condition. One of the preferred cladding options is the ferritic alloys formed of iron, chromium, and aluminum (FeCrAl), which should avoid hydrogen release due to oxidation. In general, the FeCrAl alloys contain 10 - 20Cr, 3 - 5Al, and 0 - 0.12Y in weight percent. The FeCrAl alloys exhibit slow oxidation kinetics due to their chemical composition: resistance to oxidation in the presence of steam improves as a function of the chromium and aluminum content. Accordingly, the thermal and mechanical properties of the UO{sub 2}-BeO-10%vol composite fuel were coupled with those of FeCrAl alloys and added to the fuel codes. In this work, we examine the fuel rod behavior of UO{sub 2}-10%vol-BeO/FeCrAl, including a simulated reactivity transient. The fuel behavior showed reduced temperatures with UO{sub 2}-BeO/Zr and UO{sub 2}-BeO/FeCrAl, which were also compared with the UO{sub 2}/Zr system. The reactivity-initiated accident case analyzed reproduced the fuel rod called VA-1 using UO{sub 2}/Zr alloys and compared it with UO{sub 2}-BeO/FeCrAl. (author)

  4. 9 CFR 355.25 - Canning with heat processing and hermetically sealed containers; closures; code marking; heat...

    Science.gov (United States)

    2010-01-01

    Title 9 (Animals and Animal Products) of the Code of Federal Regulations, Section 355.25, under "Identification as to Class, Quality, Quantity, and Condition — Inspection Procedure," covers canning with heat processing and hermetically sealed containers: closures, code marking, heat processing, and incubation.

  5. Group-buying inventory policy with demand under Poisson process

    Directory of Open Access Journals (Sweden)

    Tammarat Kleebmek

    2016-02-01

    Group buying is a modern way of selling in an uncertain market. With the objective of minimizing the sellers' costs arising from ordering and reordering, we present in this paper a group-buying inventory model with demand governed by a Poisson process and product sales following a binomial distribution. The inventory level is under continuous review, while the lead time is fixed. A numerical example is illustrated.
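
    A toy continuous-review simulation in the spirit of the model described above; the (s, Q) policy, demand rate, lead time, and all other parameters are illustrative assumptions, not values from the paper:

        import math
        import random

        def poisson(rng, lam):
            """Knuth's algorithm for a Poisson draw."""
            l, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= l:
                    return k
                k += 1

        def simulate(days=365, rate=3.0, s=10, q=25, lead=5, seed=1):
            rng = random.Random(seed)
            on_hand, outstanding, orders, lost = 30, [], 0, 0
            for day in range(days):
                # receive orders whose fixed lead time has elapsed
                on_hand += q * sum(1 for a in outstanding if a <= day)
                outstanding = [a for a in outstanding if a > day]
                demand = poisson(rng, rate)          # daily Poisson demand
                served = min(demand, on_hand)
                lost += demand - served
                on_hand -= served
                # continuous review: reorder when inventory position <= s
                if on_hand + q * len(outstanding) <= s:
                    outstanding.append(day + lead)   # scheduled arrival day
                    orders += 1
            return orders, lost

        print(simulate())  # (orders placed, demand lost) over one year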

  6. Model, parameter and code of environmental dispersion of gaseous effluent under normal operation from nuclear power plant with 600 MWe

    International Nuclear Information System (INIS)

    Hu Erbang; Gao Zhanrong

    1998-06-01

    A model of the environmental dispersion of gaseous effluents under normal operation of a 600 MWe nuclear power plant is established, giving a mathematical expression for the annual mean atmospheric dispersion factor under mixed release conditions, based on the quality assessment of the radiological environment over 30 years of the Chinese nuclear industry. The calculation takes into account the impact of calm conditions and the following factors: the mixing layer, dry and wet deposition, radioactive decay, and buildings. The model also gives the doses from the following exposure pathways: external exposure from the immersion cloud and from ground deposition, and internal exposure due to inhalation and ingestion. The code is named ROULEA. It contains four modules, i.e. INPUT, ANRTRI, CHIQV and DOSE, for calculating the four-dimensional joint frequency, the annual mean atmospheric dispersion factor, and the doses.
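
    For orientation, a minimal sketch of a sector-averaged annual-mean dispersion factor (chi/Q) computed from a wind joint frequency — the kind of quantity such a code produces; the vertical-spread fit and the joint-frequency values below are illustrative assumptions, not ROULEA's models:

        import math

        N_SECTORS = 16

        def sigma_z(x, a=0.08, b=0.0001):
            # assumed Briggs-style vertical spread fit (illustrative only)
            return a * x / math.sqrt(1.0 + b * x)

        def chi_over_q(x, h, joint_freq):
            """Annual mean chi/Q (s/m^3) at downwind distance x (m) for an
            effective release height h (m); joint_freq maps wind-speed bins
            (m/s) to the fraction of the year the wind blows into this
            sector at that speed."""
            width = 2.0 * math.pi * x / N_SECTORS  # arc width of one sector
            sz = sigma_z(x)
            total = 0.0
            for u, f in joint_freq.items():
                total += (f * math.sqrt(2.0 / math.pi) / (width * sz * u)
                          * math.exp(-h * h / (2.0 * sz * sz)))
            return total

        print(chi_over_q(1000.0, 60.0, {2.0: 0.03, 5.0: 0.05}))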

  7. Establishment of Technical Collaboration basis between Korea and France for the development of severe accident assessment computer code under high burnup condition

    International Nuclear Information System (INIS)

    Kim, H. D.; Kim, D. H.; Park, S. Y.; Park, J. H.

    2005-10-01

    This project was performed by KAERI within the framework of building an international cooperative basis for nuclear energy. It was supported by MOST under the title 'Establishment of Technical Collaboration basis between Korea and France for the development of severe accident assessment computer code under high burn up condition'. Currently operating NPPs discharge fuel as spent fuel after a burnup of 40 GWD/MTU. In Korea, however, burnups of more than 60 GWD/MTU are expected, both for higher fuel efficiency and for the cost savings of safely storing less spent fuel. Domestic research aimed at developing fuel and cladding that can be used under high burnup conditions of up to 100 GWD/MTU is now in progress. However, the current computer codes adopt models and data that are valid only up to 40 GWD/MTU at most. The current models therefore cannot take into account phenomena that may cause differences in fission product release behavior or in the core damage process under high burnup operation (more than 40 GWD/MTU). To evaluate the safety of NPPs with high burnup fuel, improving the current severe accident codes for high burnup conditions is an important research item, and it should start without delay. Therefore, in this study, an expert group was formed to establish the research basis for severe accidents under high burnup conditions. This expert group selected and identified the research items relevant to high burnup conditions through discussions and technical seminars. Based on these selected items, a meeting between IRSN and KAERI to identify cooperative research items on severe accidents under high burnup conditions was held at the IRSN headquarters in Paris. After the meeting, KAERI and IRSN agreed to cooperate with each other on the selected items, to co-host the international seminar, and to develop the model and to

  8. Contractual Penalty and the Right to Payment for Delays Caused by Force Majeure in Czech Civil Law under the New Civil Code

    Directory of Open Access Journals (Sweden)

    Janku Martin

    2015-12-01

    In the context of contracts concluded between entrepreneurs under the Czech Civil Code, it is a relatively common arrangement that the parties disclaim any and all liability for damage arising from non-compliance with contractual obligations if they can prove that this failure was due to an obstacle independent of their will. This circumstance excluding liability for damage is called force majeure in legal theory. In many countries this circumstance is ruled upon directly by the legislation (höhere Gewalt, vis major). The Czech regulations, represented by the new Civil Code of 2012 (CivC), however, contain only a framework provision that mentions discharging reasons. The paper deals with the rather disputable issue that force majeure does not affect the obligation to pay a contractual penalty under the new rules of the CivC; this should therefore be reflected in the arrangements for contractual penalties inter partes. To this effect the paper analyses the concepts of contractual penalty and force majeure in civil law legislation. Afterwards it compares their mutual relationship and impact on the obligations of the contracting parties. Finally, it draws recommendations for practice from the perspective of the contracting process.

  9. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  10. INTRANS. A computer code for the non-linear structural response analysis of reactor internals under transient loads

    International Nuclear Information System (INIS)

    Ramani, D.T.

    1977-01-01

    The 'INTRANS' system is a general-purpose computer code designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, shield building, and external as well as internal gapped spring support systems. This paper describes a computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped-mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. The procedure is implemented in the INTRANS code, which computes component flexibilities of a discrete lumped-mass planar model of the reactor internals by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional, and shear stiffnesses, interacting with an external or internal, linear or non-linear, multi-gapped spring support system. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for solving the dynamic equilibrium equations of motion of the system. During the computation, the dynamic response of each lumped mass is calculated at each instant of time using the well-known step-by-step procedure: at any given instant, the transient dynamic motions of the system are held stationary, and the complete response at that time step is computed from the predicted motions and internal forces of the previous instant. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval
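
    A minimal sketch of the kind of time integration described above: fourth-order Runge-Kutta applied to a single lumped mass restrained by a linear spring plus a one-sided "gapped" spring that engages only after the displacement closes a clearance; all parameter values are illustrative, not from INTRANS:

        import math

        M, C, K, K_GAP, GAP = 100.0, 200.0, 1.0e5, 5.0e5, 1.0e-3

        def accel(t, x, v):
            f = 1.0e3 * math.sin(20.0 * t) - K * x - C * v  # assumed load
            if x > GAP:                      # gapped spring engages
                f -= K_GAP * (x - GAP)
            return f / M

        def rk4_step(t, x, v, dt):
            k1x, k1v = v, accel(t, x, v)
            k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
            k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
            k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x, v + dt*k3v)
            return (x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6.0,
                    v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0)

        t, x, v, dt = 0.0, 0.0, 0.0, 1.0e-4
        for _ in range(5000):                # 0.5 s of response
            x, v = rk4_step(t, x, v, dt)
            t += dt
        print(x, v)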

  11. Identification and Functional Analysis of Long Intergenic Non-coding RNAs Underlying Intramuscular Fat Content in Pigs

    Directory of Open Access Journals (Sweden)

    Cheng Zou

    2018-03-01

    Intramuscular fat (IMF) content is an important trait that can affect pork quality. Previous studies have identified many genes that can regulate IMF. Long intergenic non-coding RNAs (lincRNAs) are emerging as key regulators in various biological processes. However, lincRNAs related to IMF in the pig are largely unknown, and the mechanisms by which they regulate IMF are yet to be elucidated. Here we reconstructed 105,687 transcripts and identified 1,032 lincRNAs in pig longissimus dorsi muscle (LDM) at four stages with different IMF contents, based on published RNA-seq data. These lincRNAs show typical characteristics such as shorter length and lower expression compared with protein-coding genes. Combined with methylation data, we found that both promoter and gene-body methylation of lincRNAs can negatively regulate lincRNA expression. We found that lincRNAs exhibit high expression correlation with their protein-coding neighbors. Co-expression network analysis yielded eight stage-specific modules; Gene Ontology and pathway analysis of these modules suggested that some lincRNAs are involved in IMF-related processes, such as fatty acid metabolism and the peroxisome proliferator-activated receptor signaling pathway. Furthermore, we identified hub lincRNAs and found that six of them may play important roles in IMF development. This work details some lincRNAs that may affect IMF development in the pig, facilitating future research on these lincRNAs and molecular-assisted breeding of pigs.

  12. Nationwide Risk-Based PCB Remediation Waste Disposal Approvals under Title 40 of the Code of Federal Regulations (CFR) Section 761.61(c)

    Science.gov (United States)

    This page contains information about Nationwide Risk-Based Polychlorinated Biphenyls (PCBs) Remediation Waste Disposal Approvals under Title 40 of the Code of Federal Regulations (CFR) Section 761.61(c)

  13. Nationwide Enviro Jet PCB Decontamination Approval and Notifications under Title 40 of the Code of Federal Regulations (CFR) Section 761.79(h)

    Science.gov (United States)

    This page contains information about approvals and notifications for Enviro Jet to Decontaminate PCB-contaminated natural gas pipelines under Title 40 of the Code of Federal Regulations (CFR) Section 761.79(h)

  14. Multiple optical code-label processing using multi-wavelength frequency comb generator and multi-port optical spectrum synthesizer.

    Science.gov (United States)

    Moritsuka, Fumi; Wada, Naoya; Sakamoto, Takahide; Kawanishi, Tetsuya; Komai, Yuki; Anzai, Shimako; Izutsu, Masayuki; Kodate, Kashiko

    2007-06-11

    In optical packet switching (OPS) and optical code division multiple access (OCDMA) systems, label generation and processing are key technologies. Recently, several label processors have been proposed and demonstrated. However, in order to recognize N different labels, N separate devices are required. Here, we propose and experimentally demonstrate a large-scale, multiple optical code (OC)-label generation and processing technology based on a multi-port, fully tunable optical spectrum synthesizer (OSS) and a multi-wavelength electro-optic frequency comb generator. The OSS can generate 80 different OC-labels simultaneously and can perform 80-parallel matched filtering. We also demonstrate its application to OCDMA.

  15. Redox processes at a nanostructured interface under strong electric fields.

    Science.gov (United States)

    Steurer, Wolfram; Surnev, Svetlozar; Netzer, Falko P; Sementa, Luca; Negreiros, Fabio R; Barcaro, Giovanni; Durante, Nicola; Fortunelli, Alessandro

    2014-09-21

    Manipulation of chemistry and film growth via external electric fields is a longstanding goal in surface science. Numerous systems have been predicted to show such effects, but experimental evidence is sparse. Here we demonstrate in a custom-designed UHV apparatus that the application of spatially extended, homogeneous, very high (>1 V nm(-1)) DC fields not only changes the system energetics but triggers dynamic processes which become important well before static contributions appreciably modify the potential energy landscape. We take a well-characterized ultrathin NiO film on a Ag(100) support as a proof-of-principle test case, and show how it is reduced to supported Ni clusters under fields exceeding the threshold of +0.9 V nm(-1). Using an effective model, we trace the observed interfacial redox process to a dissociative electron attachment resonant mechanism. The proposed approach can be easily implemented and generally applied to a wide range of interfacial systems, thus opening new opportunities for the manipulation of film growth and reaction processes at solid surfaces under strong external fields.

  16. Regional TEC model under quiet geomagnetic conditions and low-to-moderate solar activity based on CODE GIMs

    Science.gov (United States)

    Feng, Jiandi; Jiang, Weiping; Wang, Zhengtao; Zhao, Zhenzhen; Nie, Linjuan

    2017-08-01

    Global empirical total electron content (TEC) models based on TEC maps effectively describe the average behavior of the ionosphere. However, the accuracy of these global models for a particular region may not be ideal. Due to the number and distribution of the International GNSS Service (IGS) stations, the accuracy of TEC maps varies geographically. A modeling database derived from global TEC maps of varying accuracy is likely one of the main reasons limiting the accuracy of new models. Moreover, many ionospheric anomalies are geographically or geomagnetically dependent, and the accuracy of global models can deteriorate if these anomalies are not fully incorporated into the modeling approach. For regional models built over small areas, these influences on modeling are greatly weakened; thus, regional TEC models may better reflect the temporal and spatial variations of TEC. In our previous work (Feng et al., 2016), a regional TEC model, TECM-NEC, was proposed for northeast China. However, that model is directed only at the typical region of Mid-latitude Summer Nighttime Anomaly (MSNA) occurrence and is not meaningful in regions without MSNA. Following the technique of the TECM-NEC model, this study proposes another regional empirical TEC model for other mid-latitude regions. Taking the small Beijing-Tianjin-Tangshan (JJT) region (37.5°-42.5° N, 115°-120° E) in China as an example, a regional empirical TEC model (TECM-JJT) is proposed using the TEC grid data from January 1, 1999 to June 30, 2015 provided by the Center for Orbit Determination in Europe (CODE) under quiet geomagnetic conditions. The TECM-JJT model fits the input CODE TEC data with a bias of 0.11 TECU and a root mean square error of 3.26 TECU. Results show that the regional model TECM-JJT is consistent with CODE TEC data and GPS-TEC data.

  17. Prediction of BWR performance under the influence of Isolation Condenser-using RAMONA-4 code

    International Nuclear Information System (INIS)

    Khan, H.J.; Cheng, H.S.; Rohatgi, U.S.

    1992-01-01

    The purpose of the Boiling Water Reactor (BWR) Isolation Condenser (IC) is to passively control the reactor pressure by removing heat from the system. This type of control is expected to reduce the frequency of opening and closing of the Safety Relief Valves (SRVs). A comparative analysis is performed for a BWR operating with and without the influence of an IC under Main Steam Isolation Valve (MSIV) closure. A regular BWR, with forced flow and high thermal power, is considered in the analysis. In addition, the effect of ICs on BWR performance is studied for natural convection flow at lower power and modified riser geometry. The IC is coupled to the steam dome for the steam inlet flow and to the Reactor Pressure Vessel (RPV) near the feedwater entrance for the condensate return flow. Transient calculations are performed using prescribed pressure set points for the SRVs and given time settings for MSIV closure. The effect of the IC on the forced flow is to reduce the rate of pressure rise and thereby decrease the cycling frequency of the SRVs; this is the primary objective of any operating IC in a BWR (e.g., Oyster Creek). The responses of the reactor thermal and fission power, steam flow rate, collapsed liquid level, and core average void fraction are found to follow the trend of the pressure. The variations in the case of an active IC can be closely related to the creation of a time lag and changes in the cycling frequency of the SRVs. An analysis for natural convection flow in a BWR indicates that the effect of an IC on its transient performance is similar to that for the forced convection system. In this case, the MSIV closure results in a lower peak pressure due to the reduced power. However, the effect of the IC in reducing the SRV cycling frequency, and the time lag between the events, are comparable to those for forced convection

  18. Efficient Option Pricing under Levy Processes, with CVA and FVA

    Directory of Open Access Journals (Sweden)

    Jimmy eLaw

    2015-07-01

    We generalize the Piterbarg (2010) model to include (1) bilateral default risk, as in Burgard and Kjaer (2012), and (2) jumps in the dynamics of the underlying asset, using general classes of Lévy processes of exponential type. We develop an efficient explicit-implicit scheme for European options and barrier options taking CVA and FVA into account. We highlight the importance of this work in the context of trading, pricing, and managing a derivative portfolio given the trajectory of regulations.

  19. Citizen Action Can Help the Code Adoption Process for Radon-Resistant New Construction: Decatur, Alabama

    Science.gov (United States)

    Adopting a code requiring radon-resistant new construction (RRNC) in Decatur, Alabama, took months of effort by four people. Their actions demonstrate the influence that passionate residents can have on reversing a city council’s direction.

  20. Development of a two-dimensional simulation code (KUAD) including atomic processes for beam direct energy conversion

    International Nuclear Information System (INIS)

    Yamamoto, Y.; Yoshikawa, K.; Hattori, Y.

    1987-01-01

    A two-dimensional simulation code for the beam direct energy conversion, called KUAD (Kyoto University Advanced DART), including various loss mechanisms has been developed, and it has shown excellent agreement with the authors' experiments using He+ beams. The beam direct energy converter (BDC) is a device that recovers the kinetic energy of unneutralized ions in the neutral beam injection (NBI) system directly as electricity. The BDC is essential not only to improving the NBI system efficiency, but also to relaxing the high heat flux problems on the beam dump as injection energies increase. Until now, no simulation code had successfully predicted BDC experimental results. The KUAD code applies an algorithm optimized for vector processing, the finite element method (FEM) for the potential calculation, and a semi-automatic method for spatial segmentation. Since particle trajectories in the KUAD code are solved analytically, very fast particle tracing is achieved by introducing an adjacent-element matrix to identify the neighboring triangular elements and electrodes. Ion space charges are also calculated analytically by the Cloud-in-Cell (CIC) method, as are electron space charges. Power losses due to atomic processes can also be evaluated in the KUAD code

  1. Realistic integration of sorption processes in transport codes for long-term safety assessments

    Energy Technology Data Exchange (ETDEWEB)

    Noseck, Ulrich; Fluegge, Judith; Britz, Susan; Schneider, Anke [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Brendler, Vinzenz; Stockmann, Madlen; Schikora, Johannes [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Lampe, Michael [Frankfurt Univ. (Germany). Goethe Center for Scientific Computing

    2012-09-15

    One important aspect of long-term safety assessment is radionuclide transport in geologic formations. In order to assess its consequences over assessment periods of one million years, numerical models describing flow and transport are applied. Sorption on mineral surfaces is the most relevant process retarding radionuclide transport. On the one hand, an increased transport time might decrease radionuclide concentrations through radioactive decay; on the other hand, it might increase the concentrations of dose-relevant daughter nuclides in decay chains. In order to treat the radionuclide sorption processes in natural systems realistically, the so-called smart K{sub d}-concept is implemented into the transport program r{sup 3}t, which is applied to large model areas and very long time scales in long-term safety assessment. In the first stage this approach is developed for a typical sedimentary system covering rock salt and clay formations in Northern Germany. The smart K{sub d}-values are based on mechanistic surface complexation models (SCM), varying in time and space and depending on the actual geochemical conditions, which might change in the future, e.g. due to the impact of climate change. The concept developed and introduced here is based on a feasible treatment of the most relevant geochemical parameters in the transport code as well as on a matrix of smart K{sub d}-values calculated in dependence on these parameters. The implementation of the concept comprises the selection of relevant elements and minerals to be considered, an experimental program to fill data gaps in the thermodynamic sorption database, an uncertainty and sensitivity analysis to identify the most important environmental parameters influencing sorption of long-term relevant radionuclides, the creation of a matrix of K{sub d}-values dependent on the selected environmental parameters, and the development and realisation of the conceptual model for treatment of temporal and

  2. 78 FR 9678 - Multi-stakeholder Process To Develop a Voluntary Code of Conduct for Smart Grid Data Privacy

    Science.gov (United States)

    2013-02-11

    The process concerns entities providing consumer energy use services. DATES: Tuesday, February 26, 2013 (9:30 a.m. to 4:30 p.m., Eastern). The notice builds on the framework for protecting privacy and promoting innovation in the global digital economy (the Privacy Blueprint). The Privacy Blueprint outlines a multi-stakeholder process for developing voluntary codes of conduct that, if adopted by...

  3. Divided Attention and Processes Underlying Sense of Agency

    Directory of Open Access Journals (Sweden)

    Wen eWen

    2016-01-01

    Sense of agency refers to the subjective feeling of controlling events through one's behavior or will. Sense of agency results from matching predictions of one's own actions with actual feedback regarding the action. Furthermore, when an action involves a cued goal, performance-based inference contributes to sense of agency; that is, if people achieve their goal, they believe themselves to be in control. Previous studies have shown that both action-effect comparison and performance-based inference contribute to sense of agency; however, the dominance of one process over the other may shift with task conditions, such as the presence or absence of specific goals. In this study, we examined the influence of divided attention on these two processes underlying sense of agency in two conditions. In the experimental task, participants continuously controlled a moving dot for 10 s while maintaining a string of three or seven digits in working memory. We found that when there was no cued goal (no-cued-goal condition), sense of agency was impaired by high cognitive load. In contrast, when participants controlled the dot based on a cued goal (cued-goal-directed condition), their sense of agency was lower than in the no-cued-goal condition and was not affected by cognitive load. The results suggest that the action-effect comparison process underlying sense of agency requires attention. On the other hand, the weaker influence of divided attention in the cued-goal-directed condition could be attributed to the dominance of performance-based inference, which is probably automatic.

  4. Attention Modulates the Neural Processes Underlying Multisensory Integration of Emotion

    Directory of Open Access Journals (Sweden)

    Hao Tam Ho

    2011-10-01

    Integrating emotional information from multiple sensory modalities is generally assumed to be a pre-attentive process (de Gelder et al., 1999). This assumption, however, presupposes that the integrative process occurs independently of attention. Using event-related potentials (ERPs), the present study investigated whether the neural processes underlying the integration of dynamic facial expressions and emotional prosody are indeed unaffected by attentional manipulations. To this end, participants were presented with congruent and incongruent face-voice combinations (e.g., an angry face combined with a neutral voice) and performed different two-choice tasks in four consecutive blocks. Three of the tasks directed the participants' attention to the emotion expressed in the face, the voice, or both. The fourth task required participants to attend to the synchronicity between the voice and the lip movements. The results show divergent modulations of early ERP components by the different attentional manipulations. For example, when attention was directed to the face (or the voice), incongruent stimuli elicited a reduced N1 compared to congruent stimuli. This effect was absent when attention was diverted away from the emotionality in both face and voice, suggesting that the detection of emotional incongruence already requires attention. Based on these findings, we question whether multisensory integration of emotion indeed occurs pre-attentively.

  5. Independent component processes underlying emotions during natural music listening.

    Science.gov (United States)

    Rogenmoser, Lars; Zollinger, Nina; Elmer, Stefan; Jäncke, Lutz

    2016-09-01

    The aim of this study was to investigate the brain processes underlying emotions during natural music listening. To address this, we recorded high-density electroencephalography (EEG) from 22 subjects while presenting a set of individually matched whole musical excerpts varying in valence and arousal. Independent component analysis was applied to decompose the EEG data into functionally distinct brain processes. A k-means cluster analysis calculated on the basis of a combination of spatial (scalp topography and dipole location mapped onto the Montreal Neurological Institute brain template) and functional (spectra) characteristics revealed 10 clusters referring to brain areas typically involved in music and emotion processing, namely in the proximity of thalamic-limbic and orbitofrontal regions as well as at frontal, fronto-parietal, parietal, parieto-occipital, temporo-occipital and occipital areas. This analysis revealed that arousal was associated with a suppression of power in the alpha frequency range. On the other hand, valence was associated with an increase in theta frequency power in response to excerpts inducing happiness compared to sadness.

  6. Probe code: a set of programs for processing and analysis of the left ventricular function - User's manual

    International Nuclear Information System (INIS)

    Piva, R.M.V.

    1987-01-01

    The User's Manual of the Probe Code is an addendum to the M.Sc. thesis entitled A Microcomputer System of Nuclear Probe to Check the Left Ventricular Function. The Probe Code is software developed for the processing and off-line analysis of left ventricular function curves obtained in vivo. These curves are produced by means of an external scintigraphic probe, collimated and positioned over the left ventricle, after intravenous injection of Tc-99m. (author)

  7. Performance analysis of spectral-phase-encoded optical code-division multiple-access system regarding the incorrectly decoded signal as a nonstationary random process

    Science.gov (United States)

    Yan, Meng; Yao, Minyu; Zhang, Hongming

    2005-11-01

    The performance of a spectral-phase-encoded (SPE) optical code-division multiple-access (OCDMA) system is analyzed. Regarding the incorrectly decoded signal (IDS) as a nonstationary random process, we derive a novel probability distribution for it. The probability distribution of the IDS is taken to be a chi-squared distribution with r=1 degrees of freedom, which is more reasonable and accurate than in previous work. The bit error rate (BER) of an SPE OCDMA system under multiple-access interference is evaluated. Numerical results show that the system can sustain a very low BER even when there are multiple simultaneous users, and that the system performs better as the code length becomes longer or the initial pulse becomes shorter.

  8. Introduction of SCIENCE code package

    International Nuclear Information System (INIS)

    Lu Haoliang; Li Jinggang; Zhu Ya'nan; Bai Ning

    2012-01-01

    The SCIENCE code package is a set of neutronics tools based on 2D assembly calculations and 3D core calculations. It is made up of APOLLO2-F, SMART and SQUALE, and is used to perform the nuclear design and loading pattern analysis for the reactors in operation or under construction at China Guangdong Nuclear Power Group. The purpose of this paper is to briefly present the physical and numerical models used in each computation code of the SCIENCE code package, including a description of the general structure of the package, the coupling relationship between the APOLLO2-F transport lattice code and the SMART core nodal code, and the SQUALE code used for processing the core maps. (authors)

  9. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Zaidman, A.; Van Rompaey, B.; Van Deursen, A.; Demeyer, S.

    2010-01-01

    Many software production processes advocate rigorous development testing alongside functional code writing, which implies that both test code and production code should co-evolve. To gain insight into the nature of this co-evolution, this paper proposes three views (realized by a tool called TeMo)

  10. Barriers to data quality resulting from the process of coding health information to administrative data: a qualitative study.

    Science.gov (United States)

    Lucyk, Kelsey; Tang, Karen; Quan, Hude

    2017-11-22

    Administrative health data are increasingly used for research and surveillance to inform decision-making because of their large sample sizes, geographic coverage, comprehensiveness, and possibility for longitudinal follow-up. Within Canadian provinces, individuals are assigned unique personal health numbers that allow for linkage of administrative health records in that jurisdiction. It is therefore necessary to ensure that these data are of high quality and that chart information is accurately coded to this end. Our objective is to explore the potential barriers to high-quality data coding through qualitative inquiry into the roles and responsibilities of medical chart coders. We conducted semi-structured interviews with 28 medical chart coders from Alberta, Canada. We used thematic analysis and open-coded each transcript to understand the process of administrative health data generation and identify barriers to its quality. The process of generating administrative health data is highly complex and involves a diverse workforce. As such, there are multiple points in this process that introduce challenges for high-quality data. For coders, the main barriers to data quality occurred around chart documentation, variability in the interpretation of chart information, and high quota expectations. This study illustrates the complex nature of barriers to high-quality coding in the context of administrative data generation. The findings from this study may be of use to data users, researchers, and decision-makers who wish to better understand the limitations of their data or pursue interventions to improve data quality.

  11. Stochastic analysis in production process and ecology under uncertainty

    CERN Document Server

    Bieda, Bogusław

    2014-01-01

    The monograph addresses the problem of stochastic analysis based on uncertainty assessment by simulation, and the application of this method in ecology and the steel industry under uncertainty. The first chapter defines the Monte Carlo (MC) method and random variables in stochastic models. Chapter two deals with contaminant transport in porous media; a stochastic approach for modeling the transit time of Municipal Solid Waste contaminants using MC simulation has been worked out. The third chapter describes the risk analysis of the waste-to-energy facility proposal for the city of Konin, including the financial aspects. Environmental impact assessment of the ArcelorMittal Steel Power Plant in Kraków is given in chapter four; here, four scenarios of the energy mix production processes were studied. Chapter five contains examples of using ecological Life Cycle Assessment (LCA) - a relatively new method of environmental impact assessment - which helps in preparing pro-ecological strategy, and which can lead to reducing t...

  12. The Analysis of the Pre-Emption Right under the Contract of Sale in the Regulation of New Civil Code

    Directory of Open Access Journals (Sweden)

    Mirela Costache

    2011-05-01

    In this paper we review the specificity of the pre-emption right in relation to the contract of sale, according to articles 1730-1740 of the New Civil Code. With the entry into force of the new regulation, the pre-emption right acquires a separate status; at present, the legal status of the right under review is diverse, with many legal provisions providing for this right in various areas beyond the sale contract, such as culture, privatization, franchising, and intellectual property. According to the analysis of the future legal provisions, the pre-emption right may have as its source both the law and the contract; one accordingly refers to the legal and the conventional right of pre-emption. We note also that, in light of the new regulations, the mechanism for exercising the right of pre-emption is similar to that applicable to the right of preference. Objectives: The purpose of this paper is to focus on the usefulness of this new legislative measure, designed to establish proper legal support specific to the holder of this right in the conclusion of a contract in relation to third parties. Approach: This topic emphasizes the use of the following methods: observation, comparison, and interpretation of laws.

  13. Transcending Rationalism and Constructivism: Chinese Leaders’ Operational Codes, Socialization Processes, and Multilateralism after the Cold War

    DEFF Research Database (Denmark)

    He, Kai; Feng, Huiyun

    2015-01-01

    This paper challenges both rationalist and constructivist approaches in explaining China's foreign policy behavior toward multilateral institutions after the Cold War. Borrowing insights from socialization theory and operational code analysis, this paper suggests a 'superficial socialization' argument to explain China's pro-multilateralist diplomacy after the Cold War. Using operational code analysis to examine belief changes across three generations of Chinese leadership and on different occasions, we argue that China's pro-multilateralist behavior is a product of 'superficial socialization'.

  14. Cognitive Processes in Decisions Under Risk Are Not the Same As in Decisions Under Uncertainty

    Directory of Open Access Journals (Sweden)

    Kirsten G Volz

    2012-07-01

    We deal with risk versus uncertainty, a distinction that is of fundamental importance for cognitive neuroscience yet largely neglected. In a world of risk (a 'small world'), all alternatives, consequences, and probabilities are known. In uncertain ('large') worlds, some of this information is unknown or unknowable. Most cognitive neuroscience studies exclusively examine the neural correlates of decisions under risk (e.g., lotteries), with the tacit implication that understanding these would lead to an understanding of decision making in general. First, we show that normative strategies for decisions under risk do not generalize to uncertain worlds, where simple heuristics are often the more accurate strategies. Second, we argue that the cognitive processes for making decisions in a world of risk are not the same as those for dealing with uncertainty. Because situations with known risks are the exception rather than the rule in human evolution, it is unlikely that our brains are adapted to them. We therefore suggest a paradigm shift towards studying decision processes in uncertain worlds and provide first examples.

  15. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    International Nuclear Information System (INIS)

    Valach, M.; Zymak, J.; Svoboda, R.

    1997-01-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behavior of WWER fuel elements under high burnup conditions at the Nuclear Research Institute Rez. The emphasis is placed on the analysis of the results from the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented into both codes. Examples of the calculations performed document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs

  16. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    Energy Technology Data Exchange (ETDEWEB)

    Valach, M; Zymak, J; Svoboda, R [Nuclear Research Inst. Rez plc, Rez (Czech Republic)

    1997-08-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behavior of WWER fuel elements under high burnup conditions at the Nuclear Research Institute Rez. The emphasis is placed on the analysis of the results from the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for the UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented into both codes. Examples of the calculations performed document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in the conclusion. (author). 6 refs, 9 figs.

  17. Neural processes underlying cultural differences in cognitive persistence.

    Science.gov (United States)

    Telzer, Eva H; Qu, Yang; Lin, Lynda C

    2017-08-01

    Self-improvement motivation, which occurs when individuals seek to improve upon their competence by gaining new knowledge and improving upon their skills, is critical for cognitive, social, and educational adjustment. While many studies have delineated the neural mechanisms supporting extrinsic motivation induced by monetary rewards, less work has examined the neural processes that support intrinsically motivated behaviors, such as self-improvement motivation. Because cultural groups traditionally vary in terms of their self-improvement motivation, we examined cultural differences in the behavioral and neural processes underlying motivated behaviors during cognitive persistence in the absence of extrinsic rewards. In Study 1, 71 American (47 females, M=19.68 years) and 68 Chinese (38 females, M=19.37 years) students completed a behavioral cognitive control task that required cognitive persistence across time. In Study 2, 14 American and 15 Chinese students completed the same cognitive persistence task during an fMRI scan. Across both studies, American students showed significant declines in cognitive performance across time, whereas Chinese participants demonstrated effective cognitive persistence. These behavioral effects were explained by cultural differences in self-improvement motivation and paralleled by increasing activation and functional coupling between the inferior frontal gyrus (IFG) and ventral striatum (VS) across the task among Chinese participants, neural activation and coupling that remained low in American participants. These findings suggest a potential neural mechanism by which the VS and IFG work in concert to promote cognitive persistence in the absence of extrinsic rewards. Thus, frontostriatal circuitry may be a neurobiological signal representing intrinsic motivation for self-improvement that serves an adaptive function, increasing Chinese students' motivation to engage in cognitive persistence.

  18. Proposal of flexible atomic and molecular process management for Monte Carlo impurity transport code based on object oriented method

    International Nuclear Information System (INIS)

    Asano, K.; Ohno, N.; Takamura, S.

    2001-01-01

    Monte Carlo simulation codes for impurity transport have been developed by several groups, mainly for fusion-related edge plasmas. The state of an impurity particle is determined by atomic and molecular processes in the plasma, such as ionization and charge exchange. A large number of atomic and molecular processes must be considered because the edge plasma contains not only impurity atoms but also impurity molecules, mainly related to chemical erosion of carbon materials, and their cross sections have been given experimentally and theoretically. We need to reveal which processes are essential under a given edge plasma condition. A Monte Carlo simulation code that takes such various atomic and molecular processes into account is therefore necessary to investigate the behavior of impurity particles in plasmas. Usually, an impurity transport simulation code is written for specific atomic and molecular processes, so that the introduction of a new process forces complicated programming work. In order to evaluate various proposed atomic and molecular processes, a flexible management of atomic and molecular reactions should be established. We have developed an impurity transport simulation code based on the object-oriented method. By employing object-oriented programming, we can handle each particle as an 'object', which encapsulates both data and procedures. A user (not necessarily a programmer) can define the properties of each particle species and the related atomic and molecular processes, and each 'object' is then defined by analyzing this information. According to the relations among the plasma particle species, objects are connected with each other and change their state by themselves. Dynamic allocation of these objects in program memory is employed to accommodate an arbitrary number of species and atomic/molecular reactions. Thus we can treat arbitrary species and processes, starting from, for instance, methane and acetylene. Such a software procedure would be useful also for industrial application plasmas
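
    A minimal sketch (not the authors' code) of the object-oriented process management idea: species and reactions are objects registered at run time, so adding a new atomic or molecular process requires no change to the transport loop; all names and rate values are illustrative assumptions:

        from dataclasses import dataclass, field
        import random

        @dataclass
        class Reaction:
            name: str
            reactant: str
            product: str
            rate: float      # reaction rate (1/s) at fixed plasma conditions

        @dataclass
        class Species:
            name: str
            reactions: list = field(default_factory=list)

        registry: dict[str, Species] = {}   # species added dynamically

        def add_species(name):
            registry[name] = Species(name)

        def add_reaction(rx):
            registry[rx.reactant].reactions.append(rx)

        def step(state, dt, rng):
            """Advance one particle by dt; each registered reaction fires
            with probability rate*dt (assumed small)."""
            for rx in registry[state].reactions:
                if rng.random() < rx.rate * dt:
                    return rx.product
            return state

        for s in ["CH4", "CH3", "C"]:
            add_species(s)
        add_reaction(Reaction("e-impact dissociation", "CH4", "CH3", 1.0e4))
        add_reaction(Reaction("breakup chain (lumped)", "CH3", "C", 5.0e3))

        rng, state = random.Random(0), "CH4"
        for _ in range(1000):
            state = step(state, 1.0e-5, rng)
        print(state)   # e.g. "C" once the breakup chain has completed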

  19. Neurophysiological processes and functional neuroanatomical structures underlying proactive effects of emotional conflicts.

    Science.gov (United States)

    Schreiter, Marie Luise; Chmielewski, Witold; Beste, Christian

    2018-07-01

    There is a strong inter-relation of cognitive and emotional processes as evidenced by emotional conflict monitoring processes. In the cognitive domain, proactive effects of conflicts have widely been studied; i.e. effects of conflicts in the n-1 trial on trial n. Yet, the neurophysiological processes and associated functional neuroanatomical structures underlying such proactive effects during emotional conflicts have not been investigated. This is done in the current study combining EEG recordings with signal decomposition methods and source localization approaches. We show that an emotional conflict in the n-1 trial differentially influences processing of positive and negative emotions in trial n, but not the processing of conflicts in trial n. The dual competition framework stresses the importance of dissociable 'perceptual' and 'response selection' or cognitive control levels for interactive effects of cognition and emotion. Only once these coding levels were isolated in the neurophysiological data, processes explaining the behavioral effects were detectable. The data show that there is not only a close correspondence between theoretical propositions of the dual competition framework and neurophysiological processes. Rather, processing levels conceptualized in the framework operate in overlapping time windows, but are implemented via distinct functional neuroanatomical structures; the precuneus (BA31) and the insula (BA13). It seems that decoding of information in the precuneus, as well as the integration of information during response selection in the insula is more difficult when confronted with angry facial emotions whenever cognitive control resources have been highly taxed by previous conflicts.

  20. Content-addressable memory processing: Multilevel coding, logical minimization, and an optical implementation

    International Nuclear Information System (INIS)

    Mirsalehi, M.M.; Gaylord, T.K.

    1986-01-01

    This paper describes the effect of the coding scheme on the number of reference patterns that need to be stored in a content-addressable memory. It is shown that the residue number system, in conjunction with multilevel coding and logical minimization, significantly reduces the number of reference patterns required for the implementation of an operation. The number of reference patterns and the total amount of information that needs to be stored are determined for the practical cases of 16-bit and 32-bit fixed-point addition and multiplication. The storage requirements were found to be achievable with state-of-the-art memory technologies. An optical holographic processor capable of parallel-input/parallel-output operation is described
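
    To see why the residue number system shrinks the stored reference patterns, consider the sketch below: addition decomposes into independent small modular additions, one per modulus, so a table-driven adder stores several small tables instead of one table over the full range (the moduli here are illustrative, not those analyzed in the paper):

        from math import prod

        MODULI = (7, 11, 13, 15)    # pairwise coprime; range = 15015

        def to_rns(x):
            return tuple(x % m for m in MODULI)

        def rns_add(a, b):
            # digit-wise modular addition: each digit could be one small
            # lookup/reference table instead of one huge table
            return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

        def from_rns(r):
            # Chinese Remainder Theorem reconstruction
            m_total = prod(MODULI)
            x = 0
            for ri, mi in zip(r, MODULI):
                ni = m_total // mi
                x += ri * ni * pow(ni, -1, mi)
            return x % m_total

        a, b = 1234, 987
        assert from_rns(rns_add(to_rns(a), to_rns(b))) == a + b
        # Table entries: sum of m*m per modulus vs. the full-range square.
        print(sum(m * m for m in MODULI), prod(MODULI) ** 2)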

  1. Vectorization of KENO IV code and an estimate of vector-parallel processing

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Higuchi, Kenji; Katakura, Jun-ichi; Kurita, Yutaka.

    1986-10-01

    The multi-group criticality safety code KENO IV has been vectorized and tested on the FACOM VP-100 vector processor. At first, the vectorized KENO IV was slower than the original on a scalar processor by a factor of 1.4 because of the overhead introduced by vectorization. After modifications of the algorithms and vectorization techniques, the vectorized version became faster than the original by factors of 1.4 and 3.0 on the vector processor for sample problems of complex and simple geometry, respectively. For further speedup of the code, some improvements in the compiler and hardware, especially the addition of Monte Carlo pipelines to the vector processor, are discussed. Finally, a pipelined parallel processor system is proposed and its performance is estimated. (author)
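
    As a generic illustration of what vectorizing a Monte Carlo loop means (unrelated to KENO IV's actual internals), compare a per-particle scalar loop with a batched version of the same free-flight sampling; the cross section value is an assumption:

        import numpy as np

        rng = np.random.default_rng(0)
        N, SIGMA_T = 100_000, 0.5     # particles; assumed total cross section

        def scalar_mean_path():
            total = 0.0
            for _ in range(N):        # one particle history at a time
                total += -np.log(rng.random()) / SIGMA_T
            return total / N

        def vector_mean_path():
            # the whole batch of flight distances in one vector operation
            return np.mean(-np.log(rng.random(N)) / SIGMA_T)

        print(scalar_mean_path(), vector_mean_path())  # both ~ 1/SIGMA_T = 2.0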

  2. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
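
    A minimal sketch of the vector-network-coding operation described above, assuming arithmetic over GF(2) and illustrative packet contents (all names are hypothetical), might look as follows:

```python
import numpy as np

L = 4  # packet (vector) length; illustrative
rng = np.random.default_rng(1)

def combine(packets, matrices):
    """Node output: sum_i M_i @ p_i over GF(2), one matrix per incoming packet."""
    acc = np.zeros(L, dtype=np.uint8)
    for M, p in zip(matrices, packets):
        acc ^= (M @ p) % 2  # XOR accumulates the mod-2 matrix-vector products
    return acc

p1 = rng.integers(0, 2, L, dtype=np.uint8)
p2 = rng.integers(0, 2, L, dtype=np.uint8)
M1 = rng.integers(0, 2, (L, L), dtype=np.uint8)
M2 = rng.integers(0, 2, (L, L), dtype=np.uint8)
print(combine([p1, p2], [M1, M2]))
```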

  3. Optimization and Control of Pressure Swing Adsorption Processes Under Uncertainty

    KAUST Repository

    Khajuria, Harish

    2012-03-21

    The real-time periodic performance of a pressure swing adsorption (PSA) system strongly depends on the choice of key decision variables and operational considerations such as processing steps and column pressure temporal profiles, making its design and operation a challenging task. This work presents a detailed optimization-based approach for simultaneously incorporating PSA design, operational, and control aspects under the effect of time-variant and time-invariant disturbances. It is applied to a two-bed, six-step PSA system represented by a rigorous mathematical model, where the key optimization objective is to maximize the expected H2 recovery while achieving a closed-loop product H2 purity of 99.99% for separating a 70% H2, 30% CH4 feed. The benefits over a sequential design and control approach are shown in terms of a closed-loop recovery improvement of more than 3%, while the incorporation of explicit/multiparametric model predictive controllers improves the closed-loop performance. © 2012 American Institute of Chemical Engineers (AIChE).

  4. Creative Industries: Development Processes Under Contemporary Conditions of Globalization

    Directory of Open Access Journals (Sweden)

    Valerija Kontrimienė

    2017-06-01

    Full Text Available The article deals with the processes of developing creative industries under conditions of worldwide economic growth and globalization, discloses the role of the creative industries sector, and shows its place in the system of the modern global economy. The paper presents a comparative analysis of theories and theoretical approaches to the creative industries sector and its development, and defines regularities and specificities characteristic of the development of creative industries. Particular attention is paid to the growth and development of creative industries in view of the current challenges of globalization, and to the most important specificities of the developing sector in the context of the challenges of economic globalization. The paper examines trends reflecting the place of the creative industries sector in the economy of the modern world, including tendencies indicating changes in the export of the products created in this sector. The article considers the issues of developing creative industries and reveals priorities for future research.

  5. Gaussian process regression for sensor networks under localization uncertainty

    Science.gov (United States)

    Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming

    2013-01-01

    In this paper, we formulate Gaussian process regression with observations under localization uncertainty due to resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. These approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly consider the localization uncertainty. Finally, we applied the proposed approaches to experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool, to provide proof-of-concept tests and to evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
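
    A hedged sketch of the Monte Carlo technique mentioned in the abstract: sample the uncertain sensor locations, run a standard GP prediction for each sample, and average. The kernel, noise level, and location-noise scale below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between 1-D location arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_obs, y_obs, x_star, noise=0.1):
    """Standard GP posterior mean at x_star for fixed observation locations."""
    K = rbf(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    return rbf(x_obs, x_star).T @ np.linalg.solve(K, y_obs)

def mc_predict(x_nominal, y_obs, x_star, loc_std=0.2, n_samples=200, seed=0):
    """Average GP predictions over sampled sensor locations (MC marginalization)."""
    rng = np.random.default_rng(seed)
    preds = [gp_predict(x_nominal + rng.normal(0.0, loc_std, x_nominal.shape),
                        y_obs, x_star) for _ in range(n_samples)]
    return np.mean(preds, axis=0)

x_nom = np.array([0.0, 1.0, 2.0, 3.0])  # reported (uncertain) sensor positions
y = np.sin(x_nom)                       # toy measurements
print(mc_predict(x_nom, y, np.array([1.5])))
```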

  6. Sexual picture processing interferes with decision-making under ambiguity.

    Science.gov (United States)

    Laier, Christian; Pawlikowski, Mirko; Brand, Matthias

    2014-04-01

    Many people watch sexually arousing material on the Internet to experience sexual arousal and gratification. When browsing for sexual stimuli, individuals have to make several decisions, all possibly leading to positive or negative consequences. Decision-making research has shown that decisions under ambiguity are influenced by the consequences received following earlier decisions. Sexual arousal might interfere with the decision-making process and should therefore lead to disadvantageous decision-making in the long run. In the current study, 82 heterosexual male participants watched sexual pictures, rated them with respect to sexual arousal, and were asked to indicate their current level of sexual arousal before and following the sexual picture presentation. Afterwards, subjects performed one of two modified versions of the Iowa Gambling Task in which sexual pictures were displayed on the advantageous card decks and neutral pictures on the disadvantageous card decks, or vice versa (n = 41/n = 41). Results demonstrated an increase of sexual arousal following the sexual picture presentation. Decision-making performance was worse when sexual pictures were associated with disadvantageous card decks compared to performance when the sexual pictures were linked to the advantageous decks. Subjective sexual arousal moderated the relationship between task condition and decision-making performance. This study emphasizes that sexual arousal interfered with decision-making, which may explain why some individuals experience negative consequences in the context of cybersex use.

  7. Blind signal processing algorithms under DC biased Gaussian noise

    Science.gov (United States)

    Kim, Namyong; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Distortions caused by a DC-biased laser input can be modeled as DC-biased Gaussian noise, and removing the DC bias is important in the demodulation process of the electrical signal in most optical communications. In this paper, a new performance criterion and a related algorithm for unsupervised equalization are proposed for communication systems in the environment of channel distortions and DC-biased Gaussian noise. The proposed criterion utilizes the Euclidean distance between the Dirac delta function located at zero on the error axis and the probability density function of biased constant modulus errors, where the constant modulus error is defined as the difference between the system output and a constant modulus calculated from the transmitted symbol points. Results obtained from simulations under channel models with fading and DC bias noise abruptly added to background Gaussian noise show that the proposed algorithm converges rapidly even after the introduction of the DC bias, proving that the proposed criterion can be effectively applied to optical communication systems corrupted by channel distortions and DC bias noise.
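
    A very loose sketch of the ingredients named in the abstract (not the paper's algorithm): the constant modulus error and a Parzen-window estimate of its density at zero, which criteria of this family drive toward a Dirac delta. R2 and the kernel width are assumptions:

```python
import numpy as np

def cm_errors(y, R2=1.0):
    """Constant modulus error of equalizer outputs y (R2 is an assumed modulus)."""
    return np.abs(y) ** 2 - R2

def parzen_density_at_zero(e, sigma=0.5):
    """Gaussian-kernel estimate of the error PDF evaluated at e = 0."""
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2))) / (sigma * np.sqrt(2.0 * np.pi))

# A well-equalized output has errors concentrated near zero, so the density
# estimate at the origin is large; maximizing it over the equalizer weights
# pushes the error PDF toward a delta at zero.
good = cm_errors(np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, 1000)))
print(parzen_density_at_zero(good))  # close to the kernel peak 1/(sigma*sqrt(2*pi))
```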

  8. The New Civil Process Code and the Mediation Act: The Incentive to Extrajudicial and Consensual Conflicts Resolution in Public Administration

    Directory of Open Access Journals (Sweden)

    Aline Sueli de Salles Santos

    2016-10-01

    Full Text Available The purpose of this paper is to discuss the contextual aspects of the norm, inserted in article 174 of the New Civil Process Code and in the Mediation Act, which determines the creation of chambers of mediation and conciliation aimed at consensual and extrajudicial conflict resolution in the public administration. In addition, it also focuses on the perspectives of this legislative innovation, which tends to produce socially relevant results.

  9. The Conciliation in the State Demands as an Alternative for the Economy in the Process in New Civil Procedure Code

    Directory of Open Access Journals (Sweden)

    Eduardo Augusto Salomão Camb

    2016-12-01

    Full Text Available The present study deals with the possibility of conciliation in demands that involve the Public Power, having as its main objective procedural economy, both in terms of a faster process and of reduced expenses for the structure of the Judiciary. Despite the unavailability of the public interest, it seeks the compatibility of this principle with the principles of the New Civil Procedure Code, which encourages conciliation as a means of settling disputes.

  10. A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding.

    Science.gov (United States)

    Taillefumier, Thibaud; Magnasco, Marcelo O

    2013-04-16

    Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
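
    The first-passage probability p(t) studied here can be estimated by direct simulation. The sketch below is an illustrative toy (unit diffusion, a smooth sinusoidal boundary in the H > 1/2 regime), not the authors' analysis:

```python
import numpy as np

def first_passage_times(n_paths=10000, n_steps=1000, dt=1e-3, seed=0):
    """Simulate Brownian paths and record when each first exceeds the boundary."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_steps + 1) * dt
    boundary = 1.0 + 0.2 * np.sin(40.0 * t)  # smooth toy boundary (H > 1/2 regime)
    fpt = np.full(n_paths, np.nan)           # NaN = never crossed in the horizon
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for i in range(n_steps):
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (x >= boundary[i])
        fpt[crossed] = t[i]
        alive &= ~crossed
    return fpt

fpt = first_passage_times()
print(np.nanmean(fpt), np.mean(np.isnan(fpt)))  # mean FPT, fraction not crossing
```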

  11. A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding

    OpenAIRE

    Taillefumier, Thibaud; Magnasco, Marcelo O.

    2013-01-01

    Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss–Markov process will...

  12. TIMS-1: a processing code for production of group constants of heavy resonant nuclei

    International Nuclear Information System (INIS)

    Takano, Hideki; Ishiguro, Yukio; Matsui, Yasushi.

    1980-09-01

    The TIMS-1 code calculates the infinitely dilute group cross sections and the temperature-dependent self-shielding factors for arbitrary values of σ0 and R, where σ0 is the effective background cross section of potential scattering and R the ratio of the atomic number densities of two resonant nuclei, if any. This code is specifically programmed to use the evaluated nuclear data files ENDF/B or JENDL as input data. In the unresolved resonance region, the resonance parameters and the level spacings are generated by a Monte Carlo method from the Porter-Thomas and Wigner distributions, respectively. The Doppler-broadened cross sections are calculated on ultra-fine lethargy meshes of about 10^-3 to 10^-5, using the generated and resolved resonance parameters. The effective group constants are calculated by solving the neutron slowing-down equation with the use of the recurrence formula for the neutron slowing-down source. The output of the calculated results is given in a format consistent with the JAERI-Fast set (JFS) or the Standard Reactor Analysis Code (SRAC) library. Both FACOM 230/75 and M200 versions of TIMS-1 are available. (author)
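
    The unresolved-resonance sampling step described in the abstract can be sketched as follows, assuming a chi-squared-with-one-degree-of-freedom Porter-Thomas law for the widths and the Wigner surmise for the level spacings; the mean width and spacing values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def porter_thomas_widths(mean_width, n):
    """Neutron widths: chi-squared with 1 dof, scaled to the requested mean."""
    return mean_width * rng.chisquare(1, size=n)

def wigner_spacings(mean_spacing, n):
    """Wigner surmise P(s) = (pi*s/2)*exp(-pi*s^2/4), sampled by inverse CDF."""
    u = rng.random(n)
    return mean_spacing * np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

widths = porter_thomas_widths(1.0e-3, 5)        # eV, illustrative mean width
energies = np.cumsum(wigner_spacings(20.0, 5))  # eV, illustrative resonance ladder
print(widths, energies)
```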

  13. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  14. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  15. CT dosimetry computer codes: Their influence on radiation dose estimates and the necessity for their revision under new ICRP radiation protection standards

    International Nuclear Information System (INIS)

    Kim, K. P.; Lee, J.; Bolch, W. E.

    2011-01-01

    Computed tomography (CT) dosimetry computer codes have been most commonly used due to their user friendliness, but with little consideration for potential uncertainty in estimated organ dose and their underlying limitations. Generally, radiation doses calculated with different CT dosimetry computer codes were comparable, although relatively large differences were observed for some specific organs or tissues. The largest difference in radiation doses calculated using different computer codes was observed for Siemens Sensation CT scanners. Radiation doses varied with patient age and sex. Younger patients and adult females receive a higher radiation dose in general than adult males for the same CT technique factors. There are a number of limitations of current CT dosimetry computer codes. These include unrealistic modelling of the human anatomy, a limited number of organs and tissues for dose calculation, inability to alter patient height and weight, and non-applicability to new CT technologies. Therefore, further studies are needed to overcome these limitations and to improve CT dosimetry. (authors)

  16. Ruthenium release modelling in air and steam atmospheres under severe accident conditions using the MAAP4 code

    International Nuclear Information System (INIS)

    Beuzet, Emilie; Lamy, Jean-Sylvestre; Perron, Hadrien; Simoni, Eric; Ducros, Gérard

    2012-01-01

    Highlights: ► We developed a new modelling of fuel oxidation and ruthenium release in the EDF version of the MAAP4 code. ► We validated this model against some VERCORS experiments. ► Ruthenium release prediction quantitatively and qualitatively well reproduced under air and steam atmospheres. - Abstract: In a nuclear power plant (NPP), a severe accident is a low-probability sequence that can lead to core fusion and fission product (FP) release to the environment (source term). For instance, during a loss-of-coolant accident, water vaporization and core uncovery can occur due to decay heat. These phenomena enhance core degradation and, subsequently, molten materials can relocate to the lower head of the vessel. Heat exchange between the debris and the vessel may cause its rupture and air ingress. After lower head failure, steam and air entering the vessel can lead to degradation and oxidation of materials that are still intact in the core. Indeed, Zircaloy-4 cladding oxidation is very exothermic, and fuel interaction with the cladding material can decrease its melting temperature by several hundred kelvin. FP release can thus be increased, notably that of ruthenium under oxidizing conditions. Ruthenium is of particular interest because of its high radio-toxicity, due to the 103Ru and 106Ru isotopes, and its ability to form highly volatile compounds, even at room temperature, such as gaseous ruthenium tetra-oxide (RuO4). It is consequently of great need to understand the phenomena governing steam and air oxidation of the fuel and ruthenium release as prerequisites for the source term issues. A review of existing data on these phenomena shows relatively good understanding. In terms of oxygen affinity, the fuel is oxidized before ruthenium, from UO2 to UO2+x. Its oxidation is a rate-controlling surface exchange reaction with the atmosphere, so that the stoichiometric deviation and oxygen partial pressure increase. High temperatures combined with the presence

  17. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    Directory of Open Access Journals (Sweden)

    Haruo Hosoya

    2017-07-01

    Full Text Available Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
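
    As background to the sparse coding submodels, a minimal sketch of sparse code inference by iterative soft-thresholding (ISTA) is given below; this is a generic building block, not the paper's Bayesian mixture inference, and the dictionary and penalty are assumptions:

```python
import numpy as np

def soft(z, lam):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ista_sparse_code(x, D, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D s||^2 + lam*||s||_1 over the code s by ISTA."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        s = soft(s + D.T @ (x - D @ s) / L, lam / L)
    return s

# tiny demo with a random normalized dictionary (illustrative only)
rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3]  # signal built from one dictionary atom
print(np.argmax(np.abs(ista_sparse_code(x, D))))  # typically recovers atom 3
```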

  18. Study of counter current flow limitation model of MARS-KS and SPACE codes under Dukler's air/water flooding test conditions

    International Nuclear Information System (INIS)

    Lee, Won Woong; Kim, Min Gil; Lee, Jeong Ik; Bang, Young Seok

    2015-01-01

    In the nuclear reactor system, CCFL (the counter current flow limitation) is an important phenomenon for evaluating the safety of nuclear reactors, because it can limit the injection of ECCS water when it occurs in components such as the hot leg, downcomer annulus or steam generator inlet plenum during a LOCA, where flows in two opposite directions are possible. CCFL is therefore one of the thermal-hydraulic models with a significant effect on reactor safety analysis code performance. In this study, the CCFL model is evaluated with MARS-KS, based on two-phase two-field governing equations, and SPACE, based on two-phase three-field governing equations; MARS-KS is being used for evaluating the safety of Korean nuclear power plants, while SPACE is currently under assessment for the same purpose. The study compares the liquid upflow and liquid downflow rates computed by the two codes for different gas flow rates against the famous Dukler's CCFL experimental data. This study is helpful for understanding the differences between system analysis codes with different governing equations, models and correlations, and for further improving the accuracy of system analysis codes.
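
    For orientation, CCFL models of this kind are often written as Wallis-type flooding correlations. The sketch below computes the liquid downflow limit for a given gas upflow under such a correlation; the constants m and C are illustrative and need not match those in MARS-KS or SPACE:

```python
import numpy as np

G = 9.81  # m/s^2

def wallis_limit_liquid_down(j_g, D, rho_g, rho_f, m=1.0, C=0.725):
    """Max superficial liquid downflow velocity allowed by a Wallis-type CCFL
    correlation sqrt(jg*) + m*sqrt(jf*) = C, given the gas upflow j_g (m/s)."""
    drho = rho_f - rho_g
    jg_star = j_g * np.sqrt(rho_g / (G * D * drho))  # dimensionless gas flux
    if jg_star >= C**2:
        return 0.0  # gas flux alone exceeds the flooding limit
    jf_star = ((C - np.sqrt(jg_star)) / m) ** 2
    return jf_star * np.sqrt(G * D * drho / rho_f)   # back to m/s

# toy air/water example: 5 cm tube, illustrative densities
print(wallis_limit_liquid_down(j_g=10.0, D=0.05, rho_g=1.2, rho_f=998.0))
```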

  19. Global Intersection of Long Non-Coding RNAs with Processed and Unprocessed Pseudogenes in the Human Genome

    Directory of Open Access Journals (Sweden)

    Michael John Milligan

    2016-03-01

    Full Text Available Pseudogenes are abundant in the human genome and had long been thought of purely as nonfunctional gene fossils. Recent observations point to a role for pseudogenes in regulating genes transcriptionally and post-transcriptionally in human cells. To computationally interrogate the network space of integrated pseudogene and long non-coding RNA regulation in the human transcriptome, we developed and implemented an algorithm to identify all long non-coding RNA (lncRNA) transcripts that overlap the genomic spans, and specifically the exons, of any human pseudogenes in either sense or antisense orientation. As inputs to our algorithm, we imported three public repositories of pseudogenes: GENCODE v17 (processed and unprocessed, Ensembl 72); Retroposed Pseudogenes V5 (processed only); and Yale Pseudo60 (processed and unprocessed, Ensembl 60); two public lncRNA catalogs: Broad Institute and GENCODE v17; NCBI annotated piRNAs; and NHGRI clinical variants. The data sets were retrieved from the UCSC Genome Database using the UCSC Table Browser. We identified 2277 loci containing exon-to-exon overlaps between pseudogenes, both processed and unprocessed, and long non-coding RNA genes. Of these loci, we identified 1167 with Genbank EST and full-length cDNA support, providing direct evidence of transcription on one or both strands with exon-to-exon overlaps. The analysis converged on 313 pseudogene-lncRNA exon-to-exon overlaps that were bidirectionally supported by both full-length cDNAs and ESTs. In the process of identifying transcribed pseudogenes, we generated a comprehensive, positionally non-redundant encyclopedia of human pseudogenes, drawing upon multiple, formerly disparate public pseudogene repositories. Collectively, these observations suggest that pseudogenes are pervasively transcribed on both strands and are common drivers of gene regulation.
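
    The core of the described algorithm is an exon-to-exon interval overlap test. A toy sketch (made-up coordinates; real inputs come from the UCSC Table Browser in BED-like form) is given below:

```python
def exons_overlap(exons_a, exons_b):
    """Return True if any exon interval in A overlaps any exon interval in B."""
    for a_start, a_end in exons_a:
        for b_start, b_end in exons_b:
            if a_start < b_end and b_start < a_end:  # half-open intervals
                return True
    return False

# hypothetical coordinates on the same strand of one chromosome
pseudogene_exons = [(1000, 1500), (2000, 2600)]
lncrna_exons = [(2550, 3100)]
print(exons_overlap(pseudogene_exons, lncrna_exons))  # True: 2550-2600 overlap
```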

  20. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
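
    To make the role of the zero symbol concrete, here is a sketch of one standard ternary decoding approach (an attenuated Hamming-style distance that skips "do not care" positions); the codewords are invented, and this is not the paper's newly proposed measure:

```python
import numpy as np

def attenuated_hamming_decode(pred, codewords):
    """pred: binary classifier outputs in {-1,+1};
    codewords: (n_classes, n_classifiers) matrix over {-1, 0, +1}.
    Positions coded 0 are skipped so they add no bias to the distance."""
    dists = []
    for cw in codewords:
        care = cw != 0
        dists.append(np.sum(pred[care] != cw[care]) / max(care.sum(), 1))
    return int(np.argmin(dists))

codewords = np.array([[+1, -1, 0],
                      [-1, +1, +1],
                      [0, -1, +1]])
print(attenuated_hamming_decode(np.array([+1, -1, +1]), codewords))  # -> class 0
```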

  1. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  2. Starch hydrolysis under low water conditions: a conceptual process design

    NARCIS (Netherlands)

    Veen, van der M.E.; Veelaert, S.; Goot, van der A.J.; Boom, R.M.

    2006-01-01

    A process concept is presented for the hydrolysis of starch to glucose in highly concentrated systems. Depending on the moisture content, the process consists of two or three stages. The two-stage process comprises combined thermal and enzymatic liquefaction, followed by enzymatic saccharification.

  3. Robust collaborative process interactions under system crash and network failures

    NARCIS (Netherlands)

    Wang, Lei; Wombacher, Andreas; Ferreira Pires, Luis; van Sinderen, Marten J.; Chi, Chihung

    2013-01-01

    With the possibility of system crashes and network failures, the design of robust client/server interactions for collaborative process execution is a challenge. If a business process changes its state, it sends messages to the relevant processes to inform about this change. However, server crashes

  4. Aerobic storage under dynamic conditions in activated sludge processes

    DEFF Research Database (Denmark)

    Majone, M.; Dircks, K.

    1999-01-01

    In activated sludge processes, several plant configurations (like plug-flow configuration of the aeration tanks, systems with selectors, contact-stabilization processes or SBR processes) impose a concentration gradient of the carbon sources to the biomass. As a consequence, the biomass grows unde...

  5. Disruption of Relational Processing Underlies Poor Memory for Order

    Science.gov (United States)

    Jonker, Tanya R.; MacLeod, Colin M.

    2015-01-01

    McDaniel and Bugg (2008) proposed that relatively uncommon stimuli and encoding tasks encourage elaborative encoding of individual items (item-specific processing), whereas relatively typical or common encoding tasks encourage encoding of associations among list items (relational processing). It is this relational processing that is thought to…

  6. Numerical study of furnace process of a 600 MW pulverized coal boiler under low load with SNCR application

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Q.X.; Shi, Y.; Liu, H.; Yang, C.H.; Wu, S.H. [Harbin Institute of Technology, Harbin (China)

    2013-07-01

    Numerical simulation of the flow, heat transfer, and combustion process in a 600 MW pulverized coal boiler under low load is performed using the Computational Fluid Dynamics (CFD) code Fluent. The distributions of temperature and species were obtained and their influence on selective non-catalytic reduction (SNCR) was analyzed. The results indicate that the furnace temperature changes significantly as the operating load declines. The furnace region with a temperature suitable for SNCR reactions moves lower with decreasing operating load. As the load falls off, the available O2 concentration for the SNCR reactions rises gently and the initial NOx concentration for the SNCR reactions decreases slightly. These variations can have some influence on the SNCR process. For the upper furnace, where the temperature is suitable for SNCR reactions, the CO concentration is close to 0 under the different loads. Consequently, the SNCR process will not be affected by CO, based on the calculations in this work.

  7. The safety relief valve handbook design and use of process safety valves to ASME and International codes and standards

    CERN Document Server

    Hellemans, Marc

    2009-01-01

    The Safety Valve Handbook is a professional reference for design, process, instrumentation, plant and maintenance engineers who work with fluid flow and transportation systems in the process industries, which covers the chemical, oil and gas, water, paper and pulp, food and bio products and energy sectors. It meets the need of engineers who have responsibilities for specifying, installing, inspecting or maintaining safety valves and flow control systems. It will also be an important reference for process safety and loss prevention engineers, environmental engineers, and plant and process designers who need to understand the operation of safety valves in a wider equipment or plant design context. No other publication is dedicated to safety valves or to the extensive codes and standards that govern their installation and use. A single source means users save time in searching for specific information about safety valves. The Safety Valve Handbook contains all of the vital technical and standards informat...

  8. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of the aerosol onto surfaces within the containment, from where they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way

  9. INLUX-DBR - A calculation code to calculate indoor natural illuminance inside buildings under various sky conditions

    International Nuclear Information System (INIS)

    Ferraro, V.; Igawa, N.; Marinelli, V.

    2010-01-01

    A calculation code, named INLUX-DBR, is presented, which is a modified version of INLUX code, able to predict the illuminance distribution on the inside surfaces of a room with six walls and a window, and on the work plane. At each desired instant the code solves the system of the illuminance equations of each surface element, characterized by the latter's reflection coefficient and its view factors toward the other elements. In the model implemented in the code, the sky-diffuse luminance distribution, the sun beam light and the light reflected from the ground toward the room are considered. The code was validated by comparing the calculated values of illuminance with the experimental values measured inside a scale model (1:5) of a building room, in various sky conditions of overcast, clear and intermediate days. The validation is performed using the sky luminance data measured by a sky scanner and the measured beam illuminance of the sun as input data. A comparative analysis of some of the well-known calculation models of sky luminance, namely Perez, Igawa and CIE models was also carried out, comparing the code predictions and the measured values of inside illuminance in the scale model.
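
    For readers wanting the structure of the computation, a toy sketch of such an illuminance system is given below: each element's illuminance is its direct sky/sun contribution plus light reflected from every other element, which yields a linear system. All reflectances, view factors, and direct terms are invented placeholders, not INLUX-DBR data:

```python
import numpy as np

n = 4  # surface elements (toy discretization)
F = np.full((n, n), 1.0 / (n - 1))  # toy view factors: uniform to all others
np.fill_diagonal(F, 0.0)            # an element does not see itself
rho = np.array([0.7, 0.5, 0.5, 0.3])             # reflection coefficients
E_direct = np.array([200.0, 50.0, 50.0, 500.0])  # lux, direct sky + sun terms

# E = E_direct + F @ (rho * E)  ->  (I - F @ diag(rho)) @ E = E_direct
E = np.linalg.solve(np.eye(n) - F * rho[None, :], E_direct)
print(E)  # total illuminance of each element, direct plus inter-reflected
```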

  10. INLUX-DBR - A calculation code to calculate indoor natural illuminance inside buildings under various sky conditions

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, V.; Igawa, N.; Marinelli, V. [Mechanical Engineering Department, University of Calabria, 87036 Arcavacata di Rende (CS) (Italy)

    2010-09-15

    A calculation code, named INLUX-DBR, is presented, which is a modified version of INLUX code, able to predict the illuminance distribution on the inside surfaces of a room with six walls and a window, and on the work plane. At each desired instant the code solves the system of the illuminance equations of each surface element, characterized by the latter's reflection coefficient and its view factors toward the other elements. In the model implemented in the code, the sky-diffuse luminance distribution, the sun beam light and the light reflected from the ground toward the room are considered. The code was validated by comparing the calculated values of illuminance with the experimental values measured inside a scale model (1:5) of a building room, in various sky conditions of overcast, clear and intermediate days. The validation is performed using the sky luminance data measured by a sky scanner and the measured beam illuminance of the sun as input data. A comparative analysis of some of the well-known calculation models of sky luminance, namely Perez, Igawa and CIE models was also carried out, comparing the code predictions and the measured values of inside illuminance in the scale model. (author)

  11. Validation of activity determination codes and nuclide vectors by using results from processing of retired components and operational waste

    International Nuclear Information System (INIS)

    Lundgren, Klas; Larsson, Arne

    2012-01-01

    Decommissioning studies for nuclear power reactors are performed in order to assess the decommissioning costs and the waste volumes, as well as to provide data for the licensing and construction of the LILW repositories. An important part of this work is to estimate the amount of radioactivity in the different types of decommissioning waste. Studsvik ALARA Engineering has performed such assessments for LWRs and other nuclear facilities in Sweden. These assessments depend to a large extent on calculations, senior expert experience and sampling at the facilities. The precision of the calculations has been found to be relatively high close to the reactor core; for natural reasons, the precision declines with distance. Even where the activity values are lower, the content of hard-to-measure nuclides can cause problems in the long-term safety demonstration of LLW repositories. At the same time, Studsvik is processing significant volumes of metallic and combustible waste from power stations in operation and in the decommissioning phase, as well as from other nuclear facilities such as research and waste treatment facilities. By combining the unique knowledge in assessment of radioactivity inventory with the large data bank that the waste processing represents, the activity determination codes can be validated and the waste processing analysis supported with additional data. The intention of this presentation is to highlight how the European nuclear industry could jointly use waste processing data for validation of activity determination codes. (authors)

  12. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.

  13. Information-Theoretic Evidence for Predictive Coding in the Face-Processing System.

    Science.gov (United States)

    Brodski-Guerniero, Alla; Paasch, Georg-Friedrich; Wollstadt, Patricia; Özdemir, Ipek; Lizier, Joseph T; Wibral, Michael

    2017-08-23

    Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces showed as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity. Perception does not only depend on the information our eyes/retina and other sensory organs receive from the outside world, but also strongly on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity. Copyright © 2017 the authors.
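
    For intuition, active information storage can be sketched as the mutual information between a signal's recent past and its next sample. The toy estimator below uses plug-in entropies on a discretized series; the paper's analysis uses far more sophisticated estimators on MEG source time courses:

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Plug-in Shannon entropy (bits) of a list of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def ais(x, k=2):
    """AIS ~ I(past k samples; next sample) = H(past) + H(next) - H(joint)."""
    past = [tuple(x[i - k:i]) for i in range(k, len(x))]
    nxt = [x[i] for i in range(k, len(x))]
    joint = [p + (n,) for p, n in zip(past, nxt)]
    return entropy(past) + entropy(nxt) - entropy(joint)

x = list(np.random.default_rng(0).integers(0, 2, 5000))
print(ais(x))  # ~0 for an i.i.d. sequence, larger for predictable signals
```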

  14. Physicochemical processes occurring under action of ionizing radiation in sarcophagus

    International Nuclear Information System (INIS)

    Azarov, S.I.; Pshenichny, V.A.; Vilenskaya, L.N.; Korchevnaya, O.V.; Martseniuk, L.S.

    1998-01-01

    Results are presented from an analysis of the ionization of the environment inside the Sarcophagus by alpha, beta and gamma radiation, with the accompanying formation of ions. It is shown that, as a result of ionization and physicochemical transformations, gaseous mixtures can be released into the atmosphere which are dangerous for personnel health and can affect the overall technical safety of the Sarcophagus.

  15. 75 FR 65707 - Notice Regarding Consideration and Processing of Applications for Financial Assistance Under the...

    Science.gov (United States)

    2010-10-26

    ... consideration and processing of applications for financial assistance under the RRIF Program. FOR FURTHER...) regarding FRA's consideration and processing of applications for financial assistance under the RRIF Program... DEPARTMENT OF TRANSPORTATION Federal Railroad Administration Notice Regarding Consideration and...

  16. Impaired Letter-String Processing in Developmental Dyslexia: What Visual-to-Phonology Code Mapping Disorder?

    Science.gov (United States)

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-01-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In…

  17. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  18. Electrocatalytic reduction of carbon dioxide under plasma DBD process

    International Nuclear Information System (INIS)

    Amouroux, Jacques; Cavadias, Simeon

    2017-01-01

    Carbon dioxide can be converted, by reaction with hydrogen, into fine chemicals and liquid fuels such as methanol and DME. Methane production by the Sabatier reaction opens the way of carbon recycling for a circular economy of carbon resources. The catalytic process of methanation of carbon dioxide produces two molecules of water as a by-product. A current limitation in the CO2 methanation is the ageing of catalysts, mainly due to water adsorption during the process. To avoid this adsorption, the process is operated at high temperature (300 °C–400 °C), leading to carbon deposition on the catalyst and its deactivation. To overcome this problem, a methanation plasma-catalytic process has been developed, which achieves high CO2 conversion rate (80%), and a selectivity close to 100%, working from room temperature to 150 °C, instead of 300 °C–400 °C for the thermal catalytic process. The main characteristics of this process are high-voltage pulses of few nanoseconds duration, activating the adsorption of CO2 in bent configuration and the polarization of the catalyst. The key step in this process is the desorption of water from the polarized catalyst. The high CO2 conversion at low temperature could be explained by the creation of a plasma inside the nanopores of the catalyst. (paper)

  19. Electrocatalytic reduction of carbon dioxide under plasma DBD process

    Science.gov (United States)

    Amouroux, Jacques; Cavadias, Simeon

    2017-11-01

    Carbon dioxide can be converted, by reaction with hydrogen, into fine chemicals and liquid fuels such as methanol and DME. Methane production by the Sabatier reaction opens the way of carbon recycling for a circular economy of carbon resources. The catalytic process of methanation of carbon dioxide produces two molecules of water as a by-product. A current limitation in the CO2 methanation is the ageing of catalysts, mainly due to water adsorption during the process. To avoid this adsorption, the process is operated at high temperature (300 °C-400 °C), leading to carbon deposition on the catalyst and its deactivation. To overcome this problem, a methanation plasma-catalytic process has been developed, which achieves high CO2 conversion rate (80%), and a selectivity close to 100%, working from room temperature to 150 °C, instead of 300 °C-400 °C for the thermal catalytic process. The main characteristics of this process are high-voltage pulses of few nanoseconds duration, activating the adsorption of CO2 in bent configuration and the polarization of the catalyst. The key step in this process is the desorption of water from the polarized catalyst. The high CO2 conversion at low temperature could be explained by the creation of a plasma inside the nanopores of the catalyst.

  20. Accuracy of Single Frequency GPS Observations Processing In Near Real-time With Use of Code Predicted Products

    Science.gov (United States)

    Wielgosz, P. A.

    In this year, the system of active geodetic GPS permanent stations is going to be established in Poland. This system should provide GPS observations for a wide spectrum of users; in particular, it will be a great opportunity for surveyors. Many surveyors still use cheaper, single-frequency receivers. This paper focuses on the processing of single-frequency GPS observations only. During processing of such observations the ionosphere plays an important role, so we concentrated on the influence of the ionosphere on the positional coordinates. Twenty consecutive days of GPS data from the year 2001 were processed to analyze the accuracy of the derived three-dimensional relative vector position between GPS stations. Observations from two Polish EPN/IGS stations, BOGO and JOZE, were used. In addition, a new test station, IGIK, was created. In this paper, the results of single-frequency GPS observations processing in near real-time are presented. Baselines of 15, 27 and 42 kilometers and sessions of 1, 2, 3, 4, and 6 hours were processed. For the processing we used CODE (Centre for Orbit Determination in Europe, Bern, Switzerland) predicted products: orbits and ionosphere information. These products are available in real time and enable near real-time processing. The software Bernese v. 4.2 for Linux and the BPE (Bernese Processing Engine) mode were used. The results are shown with reference to the dual-frequency weekly solution (the best solution). The accuracy obtained as a function of GPS session duration and baseline length is presented for single-frequency GPS observations.

  1. Successive Transfers Relating to Movable Tangible Assets and Acquisition of Property under Article 937, Paragraph (1) of the Civil Code

    Directory of Open Access Journals (Sweden)

    Mara Ioan

    2015-12-01

    Full Text Available Apparently, article 1275, paragraph (1) of the Civil Code covers all situations that may arise in practice, without distinguishing whether the constitutive or transferring contracts are of the same or of different natures. However, we consider that article 1275 of the Civil Code does not apply in all situations of successive transfers relating to movable tangible property granted by the same legal subject. Corroborating this text with the norms in article 937, paragraph (1) of the Civil Code and article 1273, paragraph (1) of the Civil Code leads to the solution according to which article 1275 of the Civil Code covers only the cases where the successive transfers of property are of the same nature, where the onerous primary act has not resulted in immediate transmission of the real right prior to the document with the free subsidiary title, and where the primal act is free and the alternative one is onerous. The rule in question is thus excluded when the primary onerous act had as its effect the immediate transmission of the real right and then, without delivery of the asset by the acquirer having occurred, a document with a free title was concluded subsidiarily.

  2. Signalign: An Ontology of DNA as Signal for Comparative Gene Structure Prediction Using Information-Coding-and-Processing Techniques.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Gu, Feng; Pan, Yi

    2016-03-01

    Conventional character-analysis-based techniques in genome analysis manifest three main shortcomings: inefficiency, inflexibility, and incompatibility. In our previous research, a general framework, called DNA As X, was proposed for character-analysis-free techniques to overcome these shortcomings, where X is an intermediate such as digit, code, signal, vector, tree, graph, network, and so on. In this paper, we further implement an ontology of DNA As Signal, by designing a tool named Signalign for comparative gene structure analysis, in which DNA sequences are converted into signal series, processed by a modified method of dynamic time warping, and measured by signal-to-noise ratio (SNR). The ontology of DNA As Signal integrates the principles and concepts of other disciplines, including information coding theory and signal processing, into sequence analysis and processing. Compared with conventional character-analysis-based methods, Signalign can not only achieve equivalent or superior performance, but also enrich the tools and the knowledge library of computational biology by extending the domain from characters/strings to diverse areas.
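
    A hedged sketch of the "DNA as signal" idea: map bases to numbers and compare the resulting series with dynamic time warping. The mapping and the plain DTW below are illustrative assumptions; Signalign itself uses a modified DTW and an SNR measure:

```python
import numpy as np

MAPPING = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}  # assumed base-to-number map

def dna_to_signal(seq):
    """Convert a DNA string into a numeric signal series."""
    return np.array([MAPPING[b] for b in seq])

def dtw_distance(s, t):
    """Plain dynamic time warping distance between two 1-D signals."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance(dna_to_signal("ACGTT"), dna_to_signal("AGGTT")))
```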

  3. CFD modeling of combustion processes using KIVA3V Code with partially stirred reactor model for turbulence-combustion interactions

    International Nuclear Information System (INIS)

    Jarnicki, R.; Sobiesiak, A.

    2002-01-01

    In order to solve the averaged conservation equations for turbulent reacting flow, one is faced with the task of specifying the averaged chemical reaction rate. This is due to the influence of turbulence on the mean reaction rates that appear in the Reynolds-averaged species concentration equation. In order to investigate the capabilities of the Partially Stirred Reactor (PaSR) combustion model, CFD modeling of two very different combustion processes was performed using the KIVA3V code with the PaSR model for turbulence-combustion interactions. Experimental results were compared with the modeling.

  4. One-dimensional thermohydraulic code THESEUS and its application to chilldown process simulation in two-phase hydrogen flows

    Science.gov (United States)

    Papadimitriou, P.; Skorek, T.

    THESEUS is a thermohydraulic code for the calculation of steady-state and transient processes in two-phase cryogenic flows. The physical model is based on four conservation equations, with separate liquid and gas phase mass conservation equations. The thermohydraulic non-equilibrium is calculated by means of evaporation and condensation models. The mechanical non-equilibrium is modeled by a full-range drift-flux model. Heat conduction in solid structures and heat exchange over the full spectrum of heat transfer regimes can also be simulated. Test analyses of two-channel chilldown experiments and comparisons with the measured data have been performed.

  5. Modeling the UO2 ex-AUC pellet process and predicting the fuel rod temperature distribution under steady-state operating condition

    Science.gov (United States)

    Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2018-06-01

    Modeling of the uranium dioxide pellet fabrication process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and prediction of the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) was used to model the process, and the FRAPCON-4.0 code to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000, designed by Westinghouse Electric Corporation, together with the pellet fabrication parameters from this study, served as input data for the code. The predictions suggest a relationship between the fabrication parameters of the UO2 pellets and their temperature distribution in the nuclear reactor.
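
    A minimal sketch of the response-surface step, assuming a quadratic model in two hypothetical fabrication factors fitted by least squares (all data points below are invented placeholders):

```python
import numpy as np

def quadratic_design(x1, x2):
    """Design matrix for a full quadratic response surface in two factors."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# hypothetical factors: compaction pressure (MPa) and sintering temperature (C)
x1 = np.array([300.0, 300.0, 400.0, 400.0, 350.0, 350.0])
x2 = np.array([1650.0, 1750.0, 1650.0, 1750.0, 1700.0, 1700.0])
y = np.array([10.35, 10.48, 10.44, 10.55, 10.50, 10.51])  # density, g/cm^3

beta, *_ = np.linalg.lstsq(quadratic_design(x1, x2), y, rcond=None)
predict = lambda a, b: quadratic_design(np.array([a]), np.array([b])) @ beta
print(predict(375.0, 1725.0))  # predicted density at an untried setting
```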

  6. Ecological Risk Assessment Process under the Endangered Species Act

    Science.gov (United States)

    This document provides an overview of the Environmental Protection Agency’s (EPA) ecological risk assessment process for the evaluation of potential risk to endangered and threatened (listed) species from exposure to pesticides.

  7. Optimization and Control of Pressure Swing Adsorption Processes Under Uncertainty

    KAUST Repository

    Khajuria, Harish; Pistikopoulos, Efstratios N.

    2012-01-01

    The real-time periodic performance of a pressure swing adsorption (PSA) system strongly depends on the choice of key decision variables and operational considerations such as processing steps and column pressure temporal profiles, making its design

  8. Modelling of the thermomechanical and physical processes in FR fuel pins using the GERMINAL code

    International Nuclear Information System (INIS)

    Roche, L.; Pelletier, M.

    2000-01-01

    Within the framework of R&D on Fast Reactor mixed oxide fuels, CEA/DEC has developed the computer code GERMINAL to study fuel pin thermal and mechanical behaviour, both under steady-state and incidental conditions, up to high burn-up (25 at%). The first part of this paper is devoted to the description of the main models: fuel evolution (central hole and porosity evolution, Plutonium redistribution, O/M radial profile, transient gas swelling, melting fuel behaviour, minor actinides production), high burn-up models (fission gas, volatile fission products and JOG formation), fuel-cladding heat transfer, fuel-cladding mechanical interaction. The second part gives some examples of calculation results taken from the GERMINAL validation data base (more than 40 experiments from PHENIX, PFR, CABRI reactors), with special emphasis on: local fission gas retention and global release, fuel geometry evolution, radial redistribution of plutonium for high burn-up fuels, solid and annular fuel behaviour during power ramps including fuel melting, helium formation from MA (Am and Np) doped homogeneous fuels. (author)

  9. Cerebro-cerebellar interactions underlying temporal information processing.

    Science.gov (United States)

    Aso, Kenji; Hanakawa, Takashi; Aso, Toshihiko; Fukuyama, Hidenao

    2010-12-01

    The neural basis of temporal information processing remains unclear, but it is proposed that the cerebellum plays an important role through its internal clock or feed-forward computation functions. In this study, fMRI was used to investigate the brain networks engaged in perceptual and motor aspects of subsecond temporal processing without accompanying coprocessing of spatial information. Direct comparison between perceptual and motor aspects of time processing was made with a categorical-design analysis. The right lateral cerebellum (lobule VI) was active during a time discrimination task, whereas the left cerebellar lobule VI was activated during a timed movement generation task. These findings were consistent with the idea that the cerebellum contributed to subsecond time processing in both perceptual and motor aspects. The feed-forward computational theory of the cerebellum predicted increased cerebro-cerebellar interactions during time information processing. In fact, a psychophysiological interaction analysis identified the supplementary motor and dorsal premotor areas, which had a significant functional connectivity with the right cerebellar region during a time discrimination task and with the left lateral cerebellum during a timed movement generation task. The involvement of cerebro-cerebellar interactions may provide supportive evidence that temporal information processing relies on the simulation of timing information through feed-forward computation in the cerebellum.

  10. Improving the Emergency Department's Processes of Coding and Billing at Brooke Army Medical Center

    National Research Council Canada - National Science Library

    Lehning, Peter

    2003-01-01

    .... Beginning in October 2002, outpatient itemized billing was mandated for use in the AMEDD. This system shifted the process of billing for outpatient services from an allinclusive rate to one based on the actual care provided...

  11. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

    International Nuclear Information System (INIS)

    Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.

    1990-06-01

    TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam from the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion ("bake-out") release mechanisms. Release of the volatile species iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
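
    The parabolic-Arrhenius form described above is easy to state compactly. The sketch below is a generic instance, not TRUMP-BD's coding; the rate constants are placeholders (Python).

      # Hedged sketch of parabolic oxidation kinetics: mass gain w obeys
      # w^2 = K*t with K = A*exp(-Q/(R*T)). A and Q below are assumed values,
      # not the empirical constants used in TRUMP-BD.
      import math

      A = 3.33e3    # pre-exponential factor, (kg/m^2)^2 per s (assumed)
      Q = 1.4e5     # activation energy, J/mol (assumed)
      R = 8.314     # gas constant, J/(mol*K)

      def oxide_mass_gain(T_kelvin, t_seconds):
          """Oxide mass gain per unit area after t seconds at temperature T."""
          K = A * math.exp(-Q / (R * T_kelvin))
          return math.sqrt(K * t_seconds)

      print(oxide_mass_gain(1500.0, 600.0), "kg/m^2")  # 10 min of steam exposure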

  12. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.

    1990-06-01

    TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam from the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion ("bake-out") release mechanisms. Release of the volatile species iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.

  13. Measuring the implementation of codes of conduct. An assessment method based on a process approach of the responsible organisation

    NARCIS (Netherlands)

    Nijhof, A.H.J.; Cludts, Stephan; Fisscher, O.A.M.; Laan, Albertus

    2003-01-01

    More and more organisations formulate a code of conduct in order to stimulate responsible behaviour among their members. Much time and energy is usually spent fixing the content of the code but many organisations get stuck in the challenge of implementing and maintaining the code. The code then

  14. Association of Postoperative Readmissions With Surgical Quality Using a Delphi Consensus Process to Identify Relevant Diagnosis Codes.

    Science.gov (United States)

    Mull, Hillary J; Graham, Laura A; Morris, Melanie S; Rosen, Amy K; Richman, Joshua S; Whittle, Jeffery; Burns, Edith; Wagner, Todd H; Copeland, Laurel A; Wahl, Tyler; Jones, Caroline; Hollis, Robert H; Itani, Kamal M F; Hawn, Mary T

    2018-04-18

    Postoperative readmission data are used to measure hospital performance, yet the extent to which these readmissions reflect surgical quality is unknown. To establish expert consensus on whether reasons for postoperative readmission are associated with the quality of surgery in the index admission. In a modified Delphi process, a panel of 14 experts in medical and surgical readmissions comprising physicians and nonphysicians from Veterans Affairs (VA) and private-sector institutions reviewed 30-day postoperative readmissions from fiscal years 2008 through 2014 associated with inpatient surgical procedures performed at a VA medical center between October 1, 2007, and September 30, 2014. The consensus process was conducted from January through May 2017. Reasons for readmission were grouped into categories based on International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes. Panelists were given the proportion of readmissions coded by each reason and median (interquartile range) days to readmission. They answered the question, "Does the readmission reason reflect possible surgical quality of care problems in the index admission?" on a scale of 1 (never related) to 5 (directly related) in 3 rounds of consensus building. The consensus process was completed in May 2017 and data were analyzed in June 2017. Consensus on proportion of ICD-9-coded readmission reasons that reflected quality of surgical procedure. In 3 Delphi rounds, the 14 panelists achieved consensus on 50 reasons for readmission; 12 panelists also completed group telephone calls between rounds 1 and 2. Readmissions with diagnoses of infection, sepsis, pneumonia, hemorrhage/hematoma, anemia, ostomy complications, acute renal failure, fluid/electrolyte disorders, or venous thromboembolism were considered associated with surgical quality and accounted for 25 521 of 39 664 readmissions (64% of readmissions; 7.5% of 340 858 index surgical procedures). The proportion of readmissions

  15. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one, which were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. By a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into another data processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, this program can be used for automation of routine work in the department of radiology
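
    The two-stage lookup described above can be pictured with a small sketch. The dictionary contents below are invented placeholders, not the real ACR tables; only the '131.3661' composition rule is taken from the abstract (Python).

      # Hedged sketch of the described flow: select an organ code, then use its
      # first digit to pick the matching pathology file, and join the two parts.
      organ_codes = {"131": "example organ entry (placeholder)"}
      pathology_files = {                 # one pathology file per leading digit
          "1": {"3661": "example pathology entry (placeholder)"},
      }

      def acr_code(organ: str, pathology: str) -> str:
          if organ not in organ_codes:
              raise KeyError("unknown organ code")
          table = pathology_files[organ[0]]   # file chosen by first digit
          if pathology not in table:
              raise KeyError("unknown pathology code")
          return f"{organ}.{pathology}"

      print(acr_code("131", "3661"))   # -> '131.3661', as in the abstract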

  16. Media audit reveals inappropriate promotion of products under the scope of the International Code of Marketing of Breast-milk Substitutes in South-East Asia.

    Science.gov (United States)

    Vinje, Kristine Hansen; Phan, Linh Thi Hong; Nguyen, Tuan Thanh; Henjum, Sigrun; Ribe, Lovise Omoijuanfo; Mathisen, Roger

    2017-06-01

    To review regulations and to perform a media audit of promotion of products under the scope of the International Code of Marketing of Breast-milk Substitutes ('the Code') in South-East Asia. We reviewed national regulations relating to the Code and 800 clips of editorial content, 387 advertisements and 217 Facebook posts from January 2015 to January 2016. We explored the ecological association between regulations and market size, and between the number of advertisements and the market size and growth of milk formula. Cambodia, Indonesia, Myanmar, Thailand and Vietnam. The age limits in national regulations on inappropriate marketing of these products are all below the Code's updated recommendation of 36 months (i.e. 12 months in Thailand and Indonesia; 24 months in the other three countries) and are voluntary in Thailand. Although the advertisements complied with the national regulations on the age limit, they had content (e.g. stages of milk formula, messages about benefits, pictures of a child) that confused audiences. Market size and growth of milk formula were positively associated with the number of newborns and the number of advertisements, and were not affected by the current level of implementation of breast-milk substitute laws and regulations. The present media audit reveals inappropriate promotion and insufficient national regulation of products under the scope of the Code in South-East Asia. Strengthened implementation of regulations aligned with the Code's updated recommendation should be part of comprehensive strategies to minimize the harmful effects of advertisements of breast-milk substitutes on maternal and child nutrition and health.

  17. MCNP-DSP, Monte Carlo Neutron-Particle Transport Code with Digital Signal Processing

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: MCNP-DSP is recommended only for experienced MCNP users working with subcritical measurements. It is a modification of the Los Alamos National Laboratory's Monte Carlo code MCNP4a that is used to simulate a variety of subcritical measurements. The DSP version was developed to simulate frequency analysis measurements, correlation (Rossi-α) measurements, pulsed neutron measurements, Feynman variance measurements, and multiplicity measurements. CCC-700/MCNP4C is recommended for general purpose calculations. 2 - Methods: MCNP-DSP performs calculations very similarly to MCNP and uses the same generalized geometry capabilities of MCNP. MCNP-DSP can only be used with continuous-energy cross-section data. A variety of source and detector options are available. However, unlike standard MCNP, the source and detector options are limited to those described in the manual because these options are specified in the MCNP-DSP extra data file. MCNP-DSP is used to obtain the time-dependent response of detectors that are modeled in the simulation geometry. The detectors represent actual detectors used in measurements. These time-dependent detector responses are used to compute a variety of quantities such as frequency analysis signatures, correlation signatures, multiplicity signatures, etc., between detectors or between sources and detectors. Energy ranges are 0-60 MeV for neutrons (data generally only available up to 20 MeV) and 1 keV - 1 GeV for photons and electrons. 3 - Restrictions on the complexity of the problem: None noted
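
    As a small illustration of one signature such a code produces, the sketch below computes the Feynman variance-to-mean statistic from gated counts; for an uncorrelated (Poisson) source it is near zero, while fission chains push it positive. The synthetic counts are stand-ins for detector data (Python).

      # Hedged sketch of the Feynman-Y statistic: Y = Var(C)/Mean(C) - 1
      # over counts C collected in fixed time gates.
      import numpy as np

      rng = np.random.default_rng(0)

      def feynman_y(counts):
          counts = np.asarray(counts, dtype=float)
          return counts.var(ddof=1) / counts.mean() - 1.0

      poisson_counts = rng.poisson(lam=4.0, size=10_000)  # uncorrelated source
      print(feynman_y(poisson_counts))   # ~0 here; >0 with multiplication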

  18. Endocrine processes underlying victory and defeat in the male rat

    NARCIS (Netherlands)

    Schuurman, Teunis

    1981-01-01

    The central questions of the present study were:1. does base line hormonal state determine agonistic behavior in male-male encounters? 2. does agonistic behavior affect hormonal state? Such an interrelationship between agonistic behavior and hormonal processes might serve as a regulatory system for

  19. Minimization of water consumption under uncertainty for PC process

    Energy Technology Data Exchange (ETDEWEB)

    Salazar, J.; Diwekar, U.; Zitney, S.

    2009-01-01

    Integrated gasification combined cycle (IGCC) technology is becoming increasingly important for the development of advanced power generation systems. As it is an emerging technology, different process configurations have been proposed heuristically for IGCC processes. One of these schemes combines the water-gas shift reaction and chemical-looping combustion for CO2 removal before the fuel gas is fed to the gas turbine, reducing the turbine size (improving economic performance) and producing sequestration-ready CO2 (improving its cleanness potential). However, these schemes have not been energetically integrated, and process synthesis techniques can be used to obtain optimal flowsheets and designs. This work studies the heat exchange network synthesis (HENS) for the water-gas shift reaction train, employing a set of alternative designs provided by Aspen Energy Analyzer (AEA) and combined in a process superstructure that was simulated in Aspen Plus (AP). For the alternative designs, large differences between the performance parameters (for instance, the utility requirements) predicted by AEA and AP were observed, suggesting the necessity of solving the HENS problem within the AP simulation environment and avoiding the AEA simplifications. A CAPE-OPEN compliant capability which makes use of an MINLP algorithm for sequential modular simulators was employed to obtain a heat exchange network that provided a cost of energy 27% lower than the base case.

  20. Processing of the GALILEO fuel rod code model uncertainties within the AREVA LWR realistic thermal-mechanical analysis methodology

    International Nuclear Information System (INIS)

    Mailhe, P.; Barbier, B.; Garnier, C.; Landskron, H.; Sedlacek, R.; Arimescu, I.; Smith, M.; Bellanger, P.

    2013-01-01

    The availability of reliable tools and an associated methodology able to accurately predict LWR fuel behavior in all conditions is of great importance for safe and economic fuel usage. For that purpose, AREVA has developed its new global fuel rod performance code GALILEO along with its associated realistic thermal-mechanical analysis methodology. This realistic methodology is based on a Monte Carlo type random sampling of all relevant input variables. After having outlined the AREVA realistic methodology, this paper focuses on the GALILEO code benchmarking process, on its extended experimental database and on the assessment of the GALILEO model uncertainties. The propagation of these model uncertainties through the AREVA realistic methodology is also presented. This processing of the GALILEO model uncertainties is of the utmost importance for accurate evaluation of fuel design margins, as illustrated by some application examples. With the submittal of the GALILEO Topical Report to the U.S. NRC in 2013, GALILEO and its methodology are on the way to being used industrially in a wide range of irradiation conditions. (authors)
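
    The Monte Carlo sampling idea can be shown in miniature. The sketch below propagates two uncertain inputs through a toy gap-conductance temperature model and reads an upper percentile; the model and all distributions are assumptions for illustration, not GALILEO's models or data (Python).

      # Hedged sketch of Monte Carlo input sampling for a realistic methodology:
      # sample inputs, run a (toy) thermal model, report the 95th percentile.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 20_000

      gap_conductance = rng.normal(5.0e3, 5.0e2, n)   # W/(m^2*K), assumed
      linear_power = rng.normal(2.0e4, 1.0e3, n)      # W/m, assumed
      clad_temp = 600.0                               # K, fixed for the toy model
      radius = 4.1e-3                                 # m, pellet radius (assumed)

      fuel_surface_temp = clad_temp + linear_power / (2 * np.pi * radius * gap_conductance)
      print(np.percentile(fuel_surface_temp, 95.0), "K")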

  1. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual, describing simulation procedures, input data preparation, output and example test cases
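
    A one-compartment caricature of such a model chain is shown below: first-order leaching from the vault combined with radioactive decay gives the release rate feeding the pathway models. All rates and the inventory are illustrative assumptions, not NSURE parameters (Python).

      # Hedged sketch of a vault-release source term: release rate
      # R(t) = k * I0 * exp(-(k + lambda) * t), with leach rate k and decay
      # constant lambda. Values below are invented for illustration.
      import math

      def release_rate(t_years, inventory_bq, leach_rate=1e-3, half_life=30.0):
          lam = math.log(2.0) / half_life        # decay constant, 1/y
          return leach_rate * inventory_bq * math.exp(-(leach_rate + lam) * t_years)

      # a Cs-137-like nuclide, 1 GBq initial inventory, 50 years after closure
      print(release_rate(50.0, 1.0e9), "Bq/y")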

  2. A Dual Coding Model of Processing Chinese as a Second Language: A Cognitive-Load Approach

    Science.gov (United States)

    Sham, Diana Po Lan

    2002-01-01

    The research was conducted in Sydney and Hong Kong using students from grades 5 to 9, whose first language or teaching medium was English, learning to read Chinese as a second language. According to cognitive load theory, the processing of single Chinese characters accompanied by pictures should impose extraneous cognitive load and thus hinder…

  3. 78 FR 73502 - Multistakeholder Process To Develop Consumer Data Privacy Code of Conduct Concerning Facial...

    Science.gov (United States)

    2013-12-06

    ... Blueprint").[1] The Privacy Blueprint directs NTIA to convene multistakeholder processes to develop legally... services for mobile devices handle personal data.[3] On December 3, 2013, NTIA announced that the goal of... Privacy Bill of Rights. [1] The Privacy Blueprint is available at http://www.whitehouse.gov/sites/default...

  4. Evidence for embodied predictive coding: the anterior insula coordinates cortical processing of tactile deviancy

    DEFF Research Database (Denmark)

    Allen, Micah; Fardo, Francesca; Dietz, Martin

    2015-01-01

    this possibility in the somatosensory domain, we measured brain activity using functional magnetic resonance imaging while healthy participants discriminated tactile stimuli in a roving oddball design. Dynamic Causal Modelling revealed that unexpected stimuli increased the strength of forward connections...... processing of tactile changes to support body awareness....

  5. Using operating experience and a cause coding tree to identify improvements for nuclear process control

    International Nuclear Information System (INIS)

    Paradies, M.W.; Busch, D.A.

    1987-01-01

    This paper outlines a systematic approach to identification of the root causes of incidents and the use of that information to improve nuclear plant process control. The paper describes how the system was developed, how the system is used, and how the system has been accepted

  6. Current state of the self-assessment process of the Code of Conduct on the safety of research reactors in Mexico

    International Nuclear Information System (INIS)

    Mamani A, Y. R.; Salgado G, J. R.

    2011-11-01

    In Mexico, the regulatory body in nuclear matters is the National Commission of Nuclear Safety and Safeguards, and there is one nuclear research reactor, the TRIGA Mark III, operated by the National Institute of Nuclear Research. This work presents the main aspects of the current status, and the future challenges, of establishing the self-assessment process of the Code of Conduct on the Safety of Research Reactors for the case of the TRIGA reactor. Additionally, the legal framework of the licensing process for nuclear activities in a research reactor is briefly described, emphasizing the main characteristics of the reactor, its uses for isotope production, the administration and verification of safety, the radiological protection programme, the emergency plan, and the qualification of the operating personnel. (Author)

  7. Characterizing the monaural and binaural processes underlying reflection masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg

    2007-01-01

    for the two RMTs, it is shown that forward masking effects only have a significant effect on reflection masking for delays above 7–10 ms. Moreover, binaural mechanisms were revealed which deteriorate auditory detection of test reflections for delays below 7–10 ms and enhance detection for larger delays....... The monaural and binaural processes that may underlie reflection masking are discussed in terms of auditory-modelling concepts....

  8. Category Specific Spatial Dissociations of Parallel Processes Underlying Visual Naming

    OpenAIRE

    Conner, Christopher R.; Chen, Gang; Pieters, Thomas A.; Tandon, Nitin

    2013-01-01

    The constituent elements and dynamics of the networks responsible for word production are a central issue to understanding human language. Of particular interest is their dependency on lexical category, particularly the possible segregation of nouns and verbs into separate processing streams. We applied a novel mixed-effects, multilevel analysis to electrocorticographic data collected from 19 patients (1942 electrodes) to examine the activity of broadly disseminated cortical networks during t...

  9. Neural Correlates of Feedback Processing in Decision Making under Risk

    Directory of Open Access Journals (Sweden)

    Beate eSchuermann

    2012-07-01

    Introduction. Event-related brain potentials (ERPs) provide important information about the sensitivity of the brain to process varying risks. The aim of the present study was to determine how different risk levels are reflected in decision-related ERPs, namely the feedback-related negativity (FRN) and the P300. Material and Methods. 20 participants conducted a probabilistic two-choice gambling task while an electroencephalogram was recorded. Choices were provided between a low-risk option yielding low rewards and low losses and a high-risk option yielding high rewards and high losses. While options differed in expected risks, they were equal in expected values and in feedback probabilities. Results. At the behavioral level, participants were generally risk-averse but modulated their risk-taking behavior according to reward history. An early positivity (P200) was enhanced on negative feedbacks in high-risk compared to low-risk options. With regard to the FRN, there were significant amplitude differences between positive and negative feedbacks in high-risk options, but not in low-risk options. While the FRN on negative feedbacks did not vary with decision riskiness, reduced amplitudes were found for positive feedbacks in high-risk relative to low-risk choices. P300 amplitudes were larger in high-risk decisions, and in an additive way, after negative compared to positive feedback. Discussion. The present study revealed significant influences of risk and valence processing on ERPs. FRN findings suggest that the reward prediction error signal is increased after high-risk decisions. The increased P200 on negative feedback in risky decisions suggests that large negative prediction errors are processed as early as in the P200 time range. The later P300 amplitude is sensitive to feedback valence as well as to the risk of a decision. Thus, the P300 carries additional information for reward processing, mainly the enhanced motivational significance of risky

  10. A model for optimization of process integration investments under uncertainty

    International Nuclear Information System (INIS)

    Svensson, Elin; Stroemberg, Ann-Brith; Patriksson, Michael

    2011-01-01

    The long-term economic outcome of energy-related industrial investment projects is difficult to evaluate because of uncertain energy market conditions. In this article, a general, multistage, stochastic programming model for the optimization of investments in process integration and industrial energy technologies is proposed. The problem is formulated as a mixed-binary linear programming model where uncertainties are modelled using a scenario-based approach. The objective is to maximize the expected net present value of the investments, which enable heat savings and decreased energy imports or increased energy exports at an industrial plant. The proposed modelling approach enables long-term planning of industrial, energy-related investments through the simultaneous optimization of immediate and later decisions. The stochastic programming approach is also suitable for modelling possibly complex process integration constraints. The general model formulation presented here is a suitable basis for more specialized case studies dealing with optimization of investments in energy efficiency. -- Highlights: → Stochastic programming approach to long-term planning of process integration investments. → Extensive mathematical model formulation. → Multi-stage investment decisions and scenario-based modelling of uncertain energy prices. → Results illustrate how investments made now affect later investment and operation opportunities. → Approach for evaluation of robustness with respect to variations in probability distribution.
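
    In the same spirit, the sketch below reduces scenario-based investment optimization to its smallest form: binary invest decisions are enumerated and scored by expected net present value over price scenarios. The numbers and the single-stage structure are invented simplifications of the multistage model described above (Python).

      # Hedged sketch: expected-NPV choice of investments over scenarios.
      from itertools import product

      investments = {"heat_exchanger": 1.2e6, "steam_turbine": 3.0e6}   # capex, EUR
      scenarios = [   # (probability, yearly saving per investment, EUR/y), invented
          (0.3, {"heat_exchanger": 0.10e6, "steam_turbine": 0.25e6}),
          (0.5, {"heat_exchanger": 0.18e6, "steam_turbine": 0.40e6}),
          (0.2, {"heat_exchanger": 0.30e6, "steam_turbine": 0.70e6}),
      ]
      years, rate = 15, 0.08
      annuity = (1 - (1 + rate) ** -years) / rate   # PV of 1 EUR/year of savings

      def expected_npv(x):
          return sum(p * sum(xi * (annuity * s[k] - investments[k])
                             for xi, k in zip(x, investments))
                     for p, s in scenarios)

      best = max(product([0, 1], repeat=len(investments)), key=expected_npv)
      print(dict(zip(investments, best)), expected_npv(best))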

  11. Category specific spatial dissociations of parallel processes underlying visual naming.

    Science.gov (United States)

    Conner, Christopher R; Chen, Gang; Pieters, Thomas A; Tandon, Nitin

    2014-10-01

    The constituent elements and dynamics of the networks responsible for word production are a central issue to understanding human language. Of particular interest is their dependency on lexical category, particularly the possible segregation of nouns and verbs into separate processing streams. We applied a novel mixed-effects, multilevel analysis to electrocorticographic data collected from 19 patients (1942 electrodes) to examine the activity of broadly disseminated cortical networks during the retrieval of distinct lexical categories. This approach was designed to overcome the issues of sparse sampling and individual variability inherent to invasive electrophysiology. Both noun and verb generation evoked overlapping, yet distinct nonhierarchical processes favoring ventral and dorsal visual streams, respectively. Notable differences in activity patterns were noted in Broca's area and superior lateral temporo-occipital regions (verb > noun) and in parahippocampal and fusiform cortices (noun > verb). Comparisons with functional magnetic resonance imaging (fMRI) results yielded a strong correlation of blood oxygen level-dependent signal and gamma power and an independent estimate of group size needed for fMRI studies of cognition. Our findings imply parallel, lexical category-specific processes and reconcile discrepancies between lesional and functional imaging studies.

  12. Acoustic wave focusing in complex media using Nonlinear Time Reversal coded signal processing

    Czech Academy of Sciences Publication Activity Database

    Dos Santos, S.; Dvořáková, Zuzana; Lints, M.; Kůs, V.; Salupere, A.; Převorovský, Zdeněk

    2014-01-01

    Vol. 19, No. 12 (2014) ISSN 1435-4934. [11th European Conference on Non-Destructive Testing (ECNDT 2014), Prague, 6-10 October 2014.] Institutional support: RVO:61388998. Keywords: ultrasonic testing (UT) * signal processing * TR-NEWS * nonlinear time reversal * NDT * nonlinear acoustics. Subject RIV: BI - Acoustics. http://www.ndt.net/events/ECNDT2014/app/content/Slides/590_DosSantos_Rev1.pdf

  13. Levels of processing and the coding of position cues in motor short-term memory.

    Science.gov (United States)

    Ho, L; Shea, J B

    1978-06-01

    The present study investigated the appropriateness of the levels-of-processing framework of memory for explaining retention of information in motor short-term memory. Subjects were given labels descriptive of the positions to be remembered by the experimenter (EL), were given no labels (NL), or provided their own labels (SL). A control group (CONT) was required to count backwards during the presentation of the criterion positions. The inclusion of a 30-sec filled retention interval as well as 0-sec and 30-sec unfilled retention intervals tested a prediction by Craik and Lockhart (1972) that, when attention is diverted from an item, information will be lost at a rate appropriate to its level of processing - that is, at slower rates for deeper levels. Groups EL and SL had greater accuracy at recall for all three retention intervals than groups CONT and NL. In addition, there was no significant increase in error between the 30-sec unfilled and 30-sec filled intervals for groups EL and SL, while there was a significant increase in error for groups CONT and NL. The data were interpreted in terms of Craik and Lockhart's (1972) levels-of-processing approach to memory.

  14. Dissociable neural processes underlying risky decisions for self versus other

    Directory of Open Access Journals (Sweden)

    Daehyun eJung

    2013-03-01

    Previous neuroimaging studies on decision making have mainly focused on decisions on behalf of oneself. Considering that people often make decisions on behalf of others, it is intriguing that there is little neurobiological evidence on how decisions for others differ from those for self. Thus, the present study focused on the direct comparison between risky decisions for self and those for other using functional magnetic resonance imaging (fMRI). Participants (N = 23) were asked to perform a gambling task for themselves (decision-for-self condition) or for another person (decision-for-other condition) while in the scanner. Their task was to choose between a low-risk option (i.e., win or lose 10 points) and a high-risk option (i.e., win or lose 90 points). The winning probabilities of each option varied from 17% to 83%. Compared to choices for others, choices for self were more risk-averse at lower winning probability and more risk-seeking at higher winning probability, perhaps due to stronger affective process during risky decision for self compared to other. The brain activation pattern changed according to the target of the decision, such that reward-related regions were more active in the decision-for-self condition than in the decision-for-other condition, whereas brain regions related to the theory of mind (ToM) showed greater activation in the decision-for-other condition than in the decision-for-self condition. A parametric modulation analysis reflecting each individual's decision model revealed that activation of the amygdala and the dorsomedial prefrontal cortex (DMPFC) were associated with value computation for self and for other, respectively, during a risky financial decision. The present study suggests that decisions for self and other may recruit fundamentally distinctive neural processes, which can be mainly characterized by dominant affective/impulsive and cognitive/regulatory processes, respectively.

  15. Dissociable Neural Processes Underlying Risky Decisions for Self Versus Other

    Science.gov (United States)

    Jung, Daehyun; Sul, Sunhae; Kim, Hackjin

    2013-01-01

    Previous neuroimaging studies on decision making have mainly focused on decisions on behalf of oneself. Considering that people often make decisions on behalf of others, it is intriguing that there is little neurobiological evidence on how decisions for others differ from those for oneself. The present study directly compared risky decisions for self with those for another person using functional magnetic resonance imaging (fMRI). Participants were asked to perform a gambling task on behalf of themselves (decision-for-self condition) or another person (decision-for-other condition) while in the scanner. Their task was to choose between a low-risk option (i.e., win or lose 10 points) and a high-risk option (i.e., win or lose 90 points) with variable levels of winning probability. Compared with choices regarding others, those regarding oneself were more risk-averse at lower winning probabilities and more risk-seeking at higher winning probabilities, perhaps due to stronger affective process during risky decisions for oneself compared with those for other. The brain-activation pattern changed according to the target, such that reward-related regions were more active in the decision-for-self condition than in the decision-for-other condition, whereas brain regions related to the theory of mind (ToM) showed greater activation in the decision-for-other condition than in the decision-for-self condition. Parametric modulation analysis using individual decision models revealed that activation of the amygdala and the dorsomedial prefrontal cortex (DMPFC) were associated with value computations for oneself and for another, respectively, during risky financial decisions. The results of the present study suggest that decisions for oneself and for other may recruit fundamentally distinct neural processes, which can be mainly characterized as dominant affective/impulsive and cognitive/regulatory processes, respectively. PMID:23519016

  16. The GEM code. A simulation program for the evaporation and the fission process of an excited nucleus

    Energy Technology Data Exchange (ETDEWEB)

    Furihata, Shiori [Mitsubishi Research Institute Inc., Tokyo (Japan); Niita, Koji [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan); Meigo, Shin-ichiro; Ikeda, Yujiro; Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The GEM code is a simulation program which describes the de-excitation process of an excited nucleus, based on the Generalized Evaporation Model and the Atchison fission model. It has been shown that the combination of the Bertini intranuclear cascade model and GEM accurately predicts the cross sections of light fragments, such as Be, produced from proton-induced reactions. It has also been shown that the use of the reevaluated parameters in the Atchison model improves predictions of cross sections of fission fragments produced from the proton-induced reaction on Au. In this report, we present details and the usage of the GEM code. Furthermore, the results of benchmark calculations are shown using the combination of the Bertini intranuclear cascade model and the GEM code (INC/GEM). Neutron spectra and isotope production cross sections from reactions on various targets irradiated by protons are calculated with INC/GEM. These results are compared with experimental data as well as with calculation results from LAHET. INC/GEM reproduces the measured double-differential neutron emissions from the reactions on Al and Pb. The isotopic distributions of He, Li, and Be produced from the reaction on Ag are in good agreement with experimental data within 50%, although INC/GEM underestimates those of nuclei heavier than O. It is also shown that the predictions with INC/GEM for isotope production of light fragments, such as Li and Be, are better than those calculated with LAHET, particularly for heavy targets. INC/GEM also gives better estimates of the cross sections of fission products than LAHET. (author)

  17. Introductory Biology Textbooks Under-Represent Scientific Process

    Directory of Open Access Journals (Sweden)

    Dara B. Duncan

    2011-08-01

    Attrition of undergraduates from Biology majors is a long-standing problem. Introductory courses that fail to engage students or spark their curiosity by emphasizing the open-ended and creative nature of biological investigation and discovery could contribute to student detachment from the field. Our hypothesis was that introductory biology books devote relatively few figures to illustration of the design and interpretation of experiments or field studies, thereby de-emphasizing the scientific process. To investigate this possibility, we examined figures in six Introductory Biology textbooks published in 2008. On average, multistep scientific investigations were presented in fewer than 5% of the hundreds of figures in each book. Devoting such a small percentage of figures to the processes by which discoveries are made discourages an emphasis on scientific thinking. We suggest that by increasing significantly the illustration of scientific investigations, textbooks could support undergraduates' early interest in biology, stimulate the development of design and analytical skills, and inspire some students to participate in investigations of their own.

  18. BUSH: A computer code for calculating steady state heat transfer in LWR rod bundles under accident conditions

    International Nuclear Information System (INIS)

    Shepherd, I.M.

    1982-01-01

    The computer code BUSH has been developed for the calculation of steady-state heat transfer in a rod bundle. For a given power, flow and geometry it can calculate the temperatures in the rods, coolant and shroud, assuming that at any axial level each rod can be described by one temperature and the coolant fluid is also radially uniform at that level. Heat transfer by convection and radiation is handled, and the geometry is flexible enough to model nearly all types of envisaged shroud design for the SUPERSARA test series. The modular way in which BUSH has been written makes it suitable for future development, either within the present BUSH framework or as part of a more advanced code.
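
    The per-level bookkeeping such a bundle code performs can be sketched as a marching energy balance: the coolant heats up node by node, and a film model gives the rod surface temperature. The parameter values below are illustrative, not SUPERSARA data (Python).

      # Hedged sketch of a single-channel steady-state balance: coolant heat-up
      # from dT = q'*dz/(m_dot*cp) and surface temperature from Newton cooling.
      def axial_temperatures(t_in, m_dot, cp, q_lin, dz, h, perimeter, n_nodes):
          t_cool, profile = t_in, []
          for _ in range(n_nodes):
              t_cool += q_lin * dz / (m_dot * cp)        # coolant energy balance
              t_surf = t_cool + q_lin / (h * perimeter)  # convective film model
              profile.append((t_cool, t_surf))
          return profile

      # 1 m channel in 10 nodes, assumed water-cooled rod parameters
      for t_c, t_s in axial_temperatures(t_in=550.0, m_dot=0.3, cp=5.0e3,
                                         q_lin=2.0e4, dz=0.1, h=3.0e4,
                                         perimeter=0.03, n_nodes=10):
          print(f"coolant {t_c:7.1f} K   surface {t_s:7.1f} K")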

  19. Development of an advanced PFM code for the integrity evaluation of nuclear piping system under combined aging mechanisms

    International Nuclear Information System (INIS)

    Datta, Debashis

    2010-02-01

    A nuclear piping system is composed of several straight pipes and elbows joined by welding. These weld sections are usually the failure parts most susceptible to various degradation mechanisms. Whereas a specific location in a reactor piping system might fail by a combination of different aging mechanisms, e.g. fatigue and/or stress corrosion cracking (SCC), the majority of piping probabilistic fracture mechanics (PFM) codes can only consider a single aging mechanism at a time. Therefore, a probabilistic fracture mechanics computer code capable of considering multiple aging mechanisms was developed for an accurate failure analysis of each specific component of a nuclear piping section. A newly proposed crack-morphology-based probabilistic leak flow rate module is introduced in this code to treat fatigue- and SCC-type cracks separately. Improved models, e.g. stressor models, an elbow failure model, stress intensity factor (SIF) models, a local seismic occurrence probability model, performance-based crack detection models, etc., are also included in this code. Recent probabilistic fatigue (S-N) and SCC crack initiation (S-T) models and subsequent crack growth rate models are coded. An integrated probabilistic risk assessment and probabilistic fracture mechanics methodology is proposed. A complete flow chart of the combined aging mechanism model is presented. The combined aging mechanism module can significantly reduce simulation effort and time. Two NUREG benchmark problems, the reactor pressure vessel outlet nozzle section and a surge line elbow located just below the pressurizer, are reinvestigated with this code. The results showed that the contribution of pre-existing cracks, in addition to initiating cracks, can significantly increase the overall failure probability. The Inconel weld location of the reactor pressure vessel outlet nozzle section was the weakest point in terms of relative through-wall leak failure probability, on the order of about 10^-2 at the 40-year plant life. Considering
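
    The combined-aging idea can be caricatured in a few lines: sample an initial flaw plus fatigue and SCC growth rates for the same weld in one Monte Carlo loop, and count through-wall outcomes. Every distribution and the additive growth law below are invented placeholders, not the models of the code described above (Python).

      # Hedged sketch: Monte Carlo through-wall probability with two aging
      # mechanisms (fatigue + SCC) acting on the same crack.
      import numpy as np

      rng = np.random.default_rng(7)
      n, years, wall = 100_000, 40.0, 0.03   # samples, plant life [y], wall [m]

      a0 = rng.lognormal(np.log(2e-3), 0.5, n)        # initial depth, m (assumed)
      fatigue = rng.lognormal(np.log(2e-4), 0.6, n)   # m/y (assumed)
      scc = np.where(rng.random(n) < 0.3,             # 30% susceptible (assumed)
                     rng.lognormal(np.log(5e-4), 0.8, n), 0.0)

      final_depth = a0 + (fatigue + scc) * years      # both mechanisms combined
      print("P(through-wall) ~", (final_depth >= wall).mean())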

  20. Contractual Penalty and the Right to Payment for Delays Caused by Force Majeure in Czech Civil Law under the New Civil Code

    OpenAIRE

    Janku Martin

    2015-01-01

    In the context of the conclusion of contracts between entrepreneurs under the Czech Civil Code, it is a relatively common arrangement that the parties disclaim any and all liability for damage arising from non-compliance with contractual obligations, if they can prove that this failure was due to an obstacle independent of their will. This circumstance excluding liability for the damage is called force majeure by the theory. In many countries this circumstance is ruled upon directly by the le...

  1. Method for improving the gamma-transition cascade spectra amplitude resolution during coincidence code computerized processing

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.

    1984-01-01

    A method of unfolding the differential γ-cascade spectra from radiative capture of slow neutrons, based on computerized processing of measurements performed with a spectrometer with two Ge(Li) detectors, is suggested. The efficiency of the method is illustrated using the spectrum of the 35Cl(n,γ) reaction corresponding to the 8580 keV peak. It is shown that this approach permits the resolution to be improved by a factor of 1.2-2.6 without a decrease in registration efficiency within the framework of the coincidence pulse-amplitude summation method.

  2. Evaluation of compliance with the Spanish Code of self-regulation of food and drinks advertising directed at children under the age of 12 years in Spain, 2012.

    Science.gov (United States)

    León-Flández, K; Rico-Gómez, A; Moya-Geromin, M Á; Romero-Fernández, M; Bosqued-Estefania, M J; Damián, J; López-Jurado, L; Royo-Bordonada, M Á

    2017-09-01

    To evaluate compliance levels with the Spanish Code of self-regulation of food and drinks advertising directed at children under the age of 12 years (Publicidad, Actividad, Obesidad, Salud [PAOS] Code) in 2012, and to compare these against the figures for 2008. Cross-sectional study. Television advertisements of food and drinks (AFDs) were recorded over 7 days in 2012 (8 a.m.-midnight) on five Spanish channels popular with children. AFDs were classified as core (nutrient-rich/low-calorie products), non-core (nutrient-poor/calorie-rich products) or miscellaneous. Compliance with each standard of the PAOS Code was evaluated. An AFD was deemed fully compliant when it met all the standards. Two thousand five hundred and eighty-two AFDs came within the purview of the PAOS Code. The standards registering the highest levels of non-compliance were those regulating the suitability of the information presented (79.4%) and those prohibiting the use of characters popular with children (25%). Overall non-compliance with the Code was greater in 2012 than in 2008 (88.3% vs 49.3%). Non-compliance was highest for advertisements screened on children's/youth channels (92.3% vs. 81.5%; P < 0.001) and for those aired outside the enhanced-protection time slot (89.3% vs. 86%; P = 0.015). Non-compliance with the PAOS Code is higher than in 2008. Given the lack of effectiveness of self-regulation, a statutory system should be adopted to ban AFDs directed at minors, or at least restrict them to healthy products.

  3. BIOCHEMICAL PROCESSES IN CHERNOZEM SOIL UNDER DIFFERENT FERTILIZATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Ecaterina Emnova

    2012-06-01

    The paper deals with the evaluation of the intensity of certain soil biochemical processes (e.g. soil organic C mineralization) under organic and mixed mineral+organic fertilization of typical chernozem in crop rotation dynamics (over 6 years), by use of eco-physiological indicators of biological soil quality: microbial biomass carbon, basal soil respiration, as well as microbial and metabolic quotients. Soil sampling was performed from a long-term field crop experiment established in 1971 at the Balti steppe (Northern Moldova). The crop type had a more considerable impact on soil microbial biomass accumulation and community biochemical activity than the long-term organic or mixed mineral+organic fertilizer amendments. The organic fertilization system does not make it possible to avoid the loss of organic C in arable typical chernozem. The organic fertilizer (cattle manure) is, however, able to mitigate the negative consequences of long-term mineral fertilization.

  4. Flux behaviour under different operational conditions in osmosis process

    DEFF Research Database (Denmark)

    Korenak, Jasmina; Zarebska, Agata; Buksek, Hermina

    The transport of water molecules across a semi-permeable membrane is driven by the osmotic pressure difference between feed and draw solution. Two different operational modes can be distinguished, namely FO mode, when the active membrane layer is facing the wastewater (feed), and PRO mode, when the active membrane layer is facing the draw solution. The osmosis process can be affected by several factors, such as operating conditions (temperature and cross-flow velocity), feed and draw solution properties, and membrane characteristics. These factors can significantly contribute to the efficiency…, and total dissolved solids. Taken together, our results can contribute to understanding of how the performance of asymmetric FO membranes can be enhanced by feed and draw properties, membrane characteristics and operational conditions.

  5. Dress codes and appearance policies: challenges under federal legislation, part 3: Title VII, the Americans with Disabilities Act, and the National Labor Relations Act.

    Science.gov (United States)

    Mitchell, Michael S; Koen, Clifford M; Darden, Stephen M

    2014-01-01

    As more and more individuals express themselves with tattoos and body piercings and push the envelope on what is deemed appropriate in the workplace, employers have an increased need for creation and enforcement of reasonable dress codes and appearance policies. As with any employment policy or practice, an appearance policy must be implemented and enforced without regard to an individual's race, color, sex, national origin, religion, disability, age, or any other protected status. A policy governing dress and appearance based on the business needs of an employer that is applied fairly and consistently and does not have a disproportionate effect on any protected class will generally be upheld if challenged in court. By examining some of the more common legal challenges to dress codes and how courts have resolved the disputes, health care managers can avoid many potential problems. This article, the third part of a 3-part examination of dress codes and appearance policies, focuses on the issues of race and national origin under the Civil Rights Act, disability under the Americans With Disabilities Act, and employees' rights to engage in concerted activities under the National Labor Relations Act. Pertinent court cases that provide guidance for employers are addressed.

  6. Modelling soil carbon fate under erosion process in vineyard

    Science.gov (United States)

    Novara, Agata; Scalenghe, Riccardo; Minacapilli, Mario; Maltese, Antonino; Capodici, Fulvio; Borgogno Mondino, Enrico; Gristina, Luciano

    2017-04-01

    Soil erosion processes in vineyards, beyond water runoff and sediment transport, have a strong effect on soil organic carbon (SOC) loss and redistribution along the slope. The variation of SOC across the landscape produces differences in soil fertility and vine productivity. The aims of this research were to study erosion in a Mediterranean vineyard, develop an approach to estimate the SOC loss, and correlate vine vigor with sediment and carbon erosion. The study was carried out in a Sicilian (Italy) vineyard, planted in 2011. Along the slope, six pedons were studied by digging six pits up to 60 cm depth. Soil was sampled in each pedon every 10 cm and SOC was analyzed. Soil erosion, detachment and deposition areas were measured by the pole height method. The vigor of vegetation was expressed in terms of NDVI (Normalized Difference Vegetation Index) derived from a satellite image (RapidEye) acquired at the berry pre-veraison stage (July) and characterized by 5 spectral bands in the shortwave region, including a band in the red wavelengths (R, 630-685 nm) and one in the near infrared (NIR, 760-850 nm). Results showed that soil erosion, sediment redistribution and SOC across the hill were strongly affected by topographic features, slope and curvature. The erosion rate was 46 Mg ha-1 y-1 during the first 6 years since planting. The SOC redistribution was strongly correlated with the detachment or deposition area, as highlighted by the pole height measurements. The approach developed to estimate the SOC loss showed that over the whole study period the off-farm SOC amounted to 1.6 Mg C ha-1. As highlighted by the NDVI results, plant vigor is strongly correlated with SOC content; therefore, developing an accurate NDVI approach could be useful to detect vineyard areas characterized by low fertility due to erosion processes.
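
    The vigor index itself is a one-line computation; the sketch below applies the standard definition NDVI = (NIR - R)/(NIR + R) to the two RapidEye bands named above. The reflectance arrays are placeholders for calibrated rasters (Python).

      # Hedged sketch: NDVI from red (630-685 nm) and NIR (760-850 nm) bands.
      import numpy as np

      red = np.array([[0.08, 0.12], [0.10, 0.20]])   # reflectance (placeholder)
      nir = np.array([[0.45, 0.40], [0.42, 0.25]])   # reflectance (placeholder)

      ndvi = (nir - red) / (nir + red)
      print(ndvi)   # high values for vigorous canopy, low on degraded spots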

  7. The synaptic pharmacology underlying sensory processing in the superior colliculus.

    Science.gov (United States)

    Binns, K E

    1999-10-01

    The superior colliculus (SC) is one of the most ancient regions of the vertebrate central sensory system. In this hub afferents from several sensory pathways converge, and an extensive range of neural circuits enable primary sensory processing, multi-sensory integration and the generation of motor commands for orientation behaviours. The SC has a laminar structure and is usually considered in two parts; the superficial visual layers and the deep multi-modal/motor layers. Neurones in the superficial layers integrate visual information from the retina, cortex and other sources, while the deep layers draw together data from many cortical and sub-cortical sensory areas, including the superficial layers, to generate motor commands. Functional studies in anaesthetized subjects and in slice preparations have used pharmacological tools to probe some of the SC's interacting circuits. The studies reviewed here reveal important roles for ionotropic glutamate receptors in the mediation of sensory inputs to the SC and in transmission between the superficial and deep layers. N-methyl-D-aspartate receptors appear to have special responsibility for the temporal matching of retinal and cortical activity in the superficial layers and for the integration of multiple sensory data-streams in the deep layers. Sensory responses are shaped by intrinsic inhibitory mechanisms mediated by GABA(A) and GABA(B) receptors and influenced by nicotinic acetylcholine receptors. These sensory and motor-command activities of SC neurones are modulated by levels of arousal through extrinsic connections containing GABA, serotonin and other transmitters. It is possible to naturally stimulate many of the SC's sensory and non-sensory inputs either independently or simultaneously and this brain area is an ideal location in which to study: (a) interactions between inputs from the same sensory system; (b) the integration of inputs from several sensory systems; and (c) the influence of non-sensory systems on

  8. Proposal to consistently apply the International Code of Nomenclature of Prokaryotes (ICNP) to names of the oxygenic photosynthetic bacteria (cyanobacteria), including those validly published under the International Code of Botanical Nomenclature (ICBN)/International Code of Nomenclature for algae, fungi and plants (ICN), and proposal to change Principle 2 of the ICNP.

    Science.gov (United States)

    Pinevich, Alexander V

    2015-03-01

    This taxonomic note was motivated by the recent proposal [Oren & Garrity (2014) Int J Syst Evol Microbiol 64, 309-310] to exclude the oxygenic photosynthetic bacteria (cyanobacteria) from the wording of General Consideration 5 of the International Code of Nomenclature of Prokaryotes (ICNP), which entails unilateral coverage of these prokaryotes by the International Code of Nomenclature for algae, fungi, and plants (ICN; formerly the International Code of Botanical Nomenclature, ICBN). On the basis of key viewpoints, approaches and rules in the systematics, taxonomy and nomenclature of prokaryotes it is reciprocally proposed to apply the ICNP to names of cyanobacteria including those validly published under the ICBN/ICN. For this purpose, a change to Principle 2 of the ICNP is proposed to enable validation of cyanobacterial names published under the ICBN/ICN rules.

  9. Differential cognitive processing of Kanji and Kana words: do orthographic and semantic codes function in parallel in word matching task.

    Science.gov (United States)

    Kawakami, A; Hatta, T; Kogure, T

    2001-12-01

    Relative engagements of the orthographic and semantic codes in Kanji and Hiragana word recognition were investigated. In Exp. 1, subjects judged whether the pairs of Kanji words (prime and target) presented sequentially were physically identical to each other in the word condition. In the sentence condition, subjects decided whether the target word was valid for the prime sentence presented in advance. The results showed that the response times to target words orthographically similar to the prime were significantly slower than to semantically related target words in the word condition, and that this was also the case in the sentence condition. In Exp. 2, subjects judged whether the target word written in Hiragana was physically identical to the prime word in the word condition. In the sentence condition, subjects decided if the target word was valid for the previously presented prime sentence. Analysis indicated that response times to orthographically similar words were slower than to semantically related words in the word condition but not in the sentence condition, wherein the response times to the semantically and orthographically similar words were largely the same. Based on these results, the differential contributions of orthographic and semantic codes in the cognitive processing of Japanese Kanji and Hiragana words are discussed.

  10. Foveal Processing Under Concurrent Peripheral Load in Profoundly Deaf Adults

    Science.gov (United States)

    2016-01-01

    Development of the visual system typically proceeds in concert with the development of audition. One result is that the visual system of profoundly deaf individuals differs from that of those with typical auditory systems. While past research has suggested deaf people have enhanced attention in the visual periphery, it is still unclear whether or not this enhancement entails deficits in central vision. Profoundly deaf and typically hearing adults were administered a variant of the useful field of view task that independently assessed performance on concurrent central and peripheral tasks. Identification of a foveated target was impaired by a concurrent selective peripheral attention task, more so in profoundly deaf adults than in the typically hearing. Previous findings of enhanced performance on the peripheral task were not replicated. These data are discussed in terms of flexible allocation of spatial attention targeted towards perceived task demands, and support a modified “division of labor” hypothesis whereby attentional resources co-opted to process peripheral space result in reduced resources in the central visual field. PMID:26657078

  11. A regional process under the international initiative for IFM

    Directory of Open Access Journals (Sweden)

    Murase Masahiko

    2016-01-01

    Full Text Available Climate change is likely to result in increases in the frequency or intensity of extreme weather events, including floods. The International Flood Initiative (IFI), initiated in January 2005 by UNESCO, WMO and voluntary partner organizations, has promoted integrated flood management (IFM) to take advantage of floods and the use of floodplains while reducing the social, environmental and economic risks. Its secretariat is located in ICHARM. The initiative's objective is to support national platforms in practicing evidence-based disaster risk reduction through mobilizing scientific and research networks. After its initial decade, the initiative is providing a stepping-stone for the implementation of the Sendai Framework by revitalizing its activities, building on the success of the past while addressing existing gaps in integrated flood management strategies comprising optimal structural and nonstructural measures, thereby mainstreaming disaster risk reduction and targeting sustainable development. In this context, a new mechanism tries to facilitate monitoring, assessment and capacity building in the Asia-Pacific region. The primary outcomes of the mechanism are demand-driven networking and related documentation of best practices for (1) hazard assessment, (2) exposure assessment, (3) vulnerability assessment and coping capacity to identify the gaps, and (4) follow-up and monitoring of the IFM process.

  12. Individual TL detector characteristics in automated processing of personnel dosemeters: correction factors as extension to identity codes of dosemeter cards

    International Nuclear Information System (INIS)

    Toivonen, Matti.

    1979-07-01

    One, two and three-component dosemeter cards and their associated processing equipment were developed for personnel monitoring. A novel feature of the TLD system is that the individual sensitivity correction factors of TL detectors for β/γ radiation dosimetry and special timing factors for the readout of neutron detectors are stored on dosemeter cards as an extension of the identity codes. These data are utilized in the automatic TL reading process with the aim of cancelling out the influence of the individual detector characteristics on the measuring results. Stimulation of TL is done with hot nitrogen without removing the detectors from their cards and without any metal contact. Changes in detector characteristics are thus improbable. The reading process can be adjusted in a variety of ways. For example, each detector in the same card can be processed with optimal heating and the specific 250 deg C glow peak of neutron radiation can be roughly separated from the main LiF glow peaks. (author)
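
    As an illustration of the scheme described above, the sketch below (in Python, with an invented identity-code layout and invented factor values, since the report does not give one) shows how per-detector sensitivity factors appended to a card's identity code could be parsed and used to cancel individual detector sensitivity during readout.

        # Hypothetical sketch: applying per-detector sensitivity corrections that
        # are stored on the dosemeter card as an extension of its identity code.
        # The field layout and all values are illustrative, not the TLD system's.

        def parse_card_id(extended_id: str):
            """Split an extended identity code into the card ID proper and the
            appended correction data (three sensitivity factors, timing factor)."""
            card_id, s1, s2, s3, timing = extended_id.split("-")
            factors = [float(s1), float(s2), float(s3)]  # one per TL detector
            return card_id, factors, float(timing)

        def corrected_doses(raw_readings, factors):
            """Cancel out individual detector sensitivity: divide each raw TL
            reading by that detector's sensitivity factor."""
            return [r / f for r, f in zip(raw_readings, factors)]

        card_id, factors, timing = parse_card_id("A1234-1.03-0.97-1.01-1.10")
        print(card_id, corrected_doses([2.10, 1.95, 2.02], factors))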

  13. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  14. 22 CFR 92.92 - Service of legal process under provisions of State law.

    Science.gov (United States)

    2010-04-01

    § 92.92 Service of legal process under provisions of State law. It may be found that a State statute purporting to regulate the service of process in foreign...

  15. Computer code for the analysis of destructive pressure generation process during a fuel failure accident, PULSE-2

    International Nuclear Information System (INIS)

    Fujishiro, Toshio

    1978-03-01

    The computer code PULSE-2 has been developed for the analysis of the pressure pulse generation process when hot fuel particles come into contact with the coolant in a fuel rod failure accident. In the program, it is assumed that hot fuel fragments mix with the coolant instantly and homogeneously in the failure region. Then, the rapid vaporization of the coolant, the transient pressure rise in the failure region, and the movement of the ejected coolant slugs are calculated. The effect of the fuel-particle size distribution is taken into consideration. Heat conduction in the fuel particles and heat transfer at the fuel-coolant interface are calculated. Temperature, pressure and void fraction in the mixed region are calculated from the average enthalpy. With physical property subroutines for liquid sodium and water, the model is usable for both LMFBR and LWR conditions. (auth.)
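
    The role the particle-size distribution plays in such a model can be illustrated with a back-of-the-envelope calculation: for a fixed debris mass, smaller fragments expose far more surface for fuel-coolant heat transfer. The Python sketch below is only a hedged illustration of this dependence, not the PULSE-2 model; every symbol and value in it is an assumption.

        import math

        # Illustrative only: total initial heat flow from hot fuel fragments to
        # coolant, weighting an assumed particle-size distribution. All values
        # are invented for demonstration.
        h = 5.0e4                        # interface heat transfer coeff., W/(m^2 K)
        T_fuel, T_cool = 2800.0, 560.0   # K
        rho_fuel = 10000.0               # kg/m^3
        m_total = 1.0                    # kg of fragmented fuel mixed into coolant

        # size bins: (particle diameter in m, mass fraction of the debris)
        size_distribution = [(1e-4, 0.2), (5e-4, 0.5), (1e-3, 0.3)]

        q_total = 0.0
        for d, frac in size_distribution:
            v_particle = math.pi * d**3 / 6.0
            n_particles = m_total * frac / (rho_fuel * v_particle)
            a_particle = math.pi * d**2
            q_total += h * n_particles * a_particle * (T_fuel - T_cool)

        print(f"initial fuel-to-coolant heat flow: {q_total:.3e} W")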

  16. The banana code – Natural blend processing in the olfactory circuitry of Drosophila melanogaster

    Directory of Open Access Journals (Sweden)

    Marco eSchubert

    2014-02-01

    Full Text Available Odor information is predominantly perceived as complex odor blends. For Drosophila melanogaster one of the most attractive blends is emitted by an over-ripe banana. To analyze how the fly's olfactory system processes natural blends we combined the experimental advantages of gas chromatography and functional imaging (GC-I). In this way, natural banana compounds were presented successively to the fly antenna at close-to-naturally-occurring concentrations. This technique allowed us to identify the active odor components, use these compounds as stimuli and measure odor-induced Ca2+ signals in input and output neurons of the Drosophila antennal lobe (AL), the first olfactory neuropil. We demonstrate that mixture interactions of a natural blend are very rare and occur only at the AL output level, resulting in a surprisingly linear blend representation. However, the information regarding single components is strongly modulated by the olfactory circuitry within the AL, leading to a higher similarity between the representation of individual components and the banana blend. This observed modulation might tune the olfactory system in a way to distinctively categorize odor components and improve the detection of suitable food sources. Functional GC-I thus enables analysis of virtually any unknown natural odorant blend and its components in their relative occurring concentrations and allows characterization of neuronal responses of complete neural assemblies. This technique can be seen as a valuable complementary method to classical GC/electrophysiology techniques, and will be a highly useful tool in future investigations of insect-insect and insect-plant chemical interactions.

  17. Validation of the RELAP5 code for the modeling of flashing-induced instabilities under natural-circulation conditions using experimental data from the CIRCUS test facility

    Energy Technology Data Exchange (ETDEWEB)

    Kozmenkov, Y. [Helmholtz-Zentrum Dresden-Rossendorf e.V. (FZD), Institute of Safety Research, P.O.B. 510119, D-01324 Dresden (Germany); Institute of Physics and Power Engineering, Obninsk (Russian Federation); Rohde, U., E-mail: U.Rohde@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf e.V. (FZD), Institute of Safety Research, P.O.B. 510119, D-01324 Dresden (Germany); Manera, A. [Paul Scherrer Institute (Switzerland)

    2012-02-15

    Highlights: ► We report on the simulation of flashing-induced instabilities in natural circulation systems. ► Flashing-induced instabilities are of relevance for the operation of pool-type reactors of small power at low pressure. ► The RELAP5 code is validated against measurement data from natural circulation experiments. ► The magnitude and frequency of the oscillations were reproduced in good agreement with the measurement data. - Abstract: This paper reports on the use of the RELAP5 code for the simulation of flashing-induced instabilities in natural circulation systems. The RELAP5 code is intended to be used for the simulation of transient processes in the Russian RUTA reactor concept operating at atmospheric pressure with forced convection of coolant. However, during transient processes, natural circulation with flashing-induced instabilities might occur. The RELAP5 code is validated against measurement data from natural circulation experiments performed within the framework of a European project (NACUSP) on the CIRCUS facility. The facility, built at the Delft University of Technology in The Netherlands, is a water/steam 1:1 height-scaled loop of a typical natural-circulation-cooled BWR. It was shown that the RELAP5 code is able to model all relevant phenomena related to flashing-induced instabilities. The magnitude and frequency of the oscillations were reproduced in good agreement with the measurement data. The close correspondence to the experiments was achieved by detailed modeling of all components of the CIRCUS facility, including the heat exchanger, the buffer vessel and the steam dome at the top of the facility.

  18. Dress codes and appearance policies: challenges under federal legislation, part 1: title VII of the civil rights act and religion.

    Science.gov (United States)

    Mitchell, Michael S; Koen, Clifford M; Moore, Thomas W

    2013-01-01

    As more and more individuals choose to express themselves and their religious beliefs with headwear, jewelry, dress, tattoos, and body piercings and push the envelope on what is deemed appropriate in the workplace, employers have an increased need for creation and enforcement of reasonable dress codes and appearance policies. As with any employment policy or practice, an appearance policy must be implemented and enforced without regard to an individual's race, color, sex, national origin, religion, disability, age, or any other protected status. A policy governing dress and appearance based on the business needs of an employer that is applied fairly and consistently and does not have a disproportionate effect on any protected class will generally be upheld if challenged in court. By examining some of the more common legal challenges to dress codes and how courts have resolved the disputes, health care managers can avoid many potential problems. This article addresses the issue of religious discrimination focusing on dress and appearance and some of the court cases that provide guidance for employers.

  19. Designing the User Interface COBRET under Windows to Carry out Pre- and Post-Processing for the Programs COBRA-RERTR and PARET

    International Nuclear Information System (INIS)

    Ghazi, N.; Monther, A.; Hainoun, A.

    2004-01-01

    In the framework of testing, evaluating and applying computer codes in the design and safety analysis of research reactors, the dynamics code PARET and the thermal-hydraulic code COBRA-RERTR have been adopted. In order to run the codes under Windows and to support the user with pre- and post-processing, the user interface program COBRET has been developed in the programming language Visual Basic 6; the data used by it are organized and stored in a relational database in MS Access, an integral part of the software package MS Office. The interface works in the environment of the Windows operating system and utilizes its graphics as well as other capabilities. It consists of a pre-processor and a post-processor. The pre-processor deals with the interactive preparation of the input files for the PARET and COBRA codes. It supports the user with an automatic checking routine for detecting logical input errors, in addition to extensive direct help during the multi-mode input process. This process includes automatic branching according to the selected control parameters, which depend on the simulation modes of the physical problem considered. The post-processor supports the user with graphical tools to present the time and axial distributions of the system variables, which comprise many neutronic and thermal-hydraulic parameters of the reactor system such as neutron flux, reactivity, temperatures, flow rate, pressure and void distribution. (authors)

  20. Designing the user interface COBRET under windows to carry out pre- and post-processing for the programs COBRA-RERTR and PARET

    International Nuclear Information System (INIS)

    Hainoun, A.; Monther, A.; Ghazi, N.

    2004-01-01

    In the framework of testing, evaluating and applying computer codes in the design studies and safety analysis of research reactors, the dynamics code PARET and the thermal-hydraulic code COBRA-RERTR have been adopted. In order to run the codes under Windows and to support the user with pre- and post-processing, the user interface program COBRET has been developed in the programming language Visual Basic 6; the data used by it are organized and stored in a relational database in MS Access, an integral part of the software package MS Office. The interface works in the environment of the Windows operating system and utilizes its graphics as well as other capabilities. It consists of a pre-processor and a post-processor. The pre-processor deals with the interactive preparation of the input files for the PARET and COBRA codes. It supports the user with an automatic checking routine for detecting logical input errors, in addition to extensive direct help during the multi-mode input process. This process includes automatic branching according to the selected control parameters, which depend on the simulation modes of the physical problem considered. The post-processor supports the user with graphical tools to present the time and axial distributions of the system variables, which comprise many neutronic and thermal-hydraulic parameters of the reactor system such as neutron flux, reactivity, temperatures, flow rate, pressure and void distribution. (author)

  1. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  2. Implementing a bar-code assisted medication administration system: effects on the dispensing process and user perceptions.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Cheng, K; Lai, K; Chui, W C M; Cheung, B M Y

    2014-06-01

    We assessed the effects of a bar-code assisted medication administration system used without the support of computerised prescribing (stand-alone BCMA) on the dispensing process and its users. The stand-alone BCMA system was implemented in one ward of a teaching hospital. The number of dispensing steps, dispensing time and potential dispensing errors (PDEs) were directly observed one month before and eight months after the intervention. Attitudes of pharmacy and nursing staff were assessed using a questionnaire (Likert scale) and interviews. Among 1291 and 471 drug items observed before and after the introduction of the technology respectively, the number of dispensing steps increased from five to eight and the time (standard deviation) to dispense one drug item by one staff member increased from 0.8 (0.09) to 1.5 (0.12) min. Among 2828 and 471 drug items observed before and after the intervention respectively, the number of PDEs increased significantly. Pharmacy staff perceived that the system offered less benefit to the dispensing process (9/16). Nursing staff perceived the system as useful in improving the accuracy of drug administration (7/10). Implementing a stand-alone BCMA system may slow down and complicate the dispensing process. Nursing staff believe the stand-alone BCMA system could improve the drug administration process, but pharmacy staff believe the technology would be more helpful if supported by computerised prescribing. However, periodical assessments are needed to identify weaknesses in the process after implementation, and all users should be educated on the benefits of using this technology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  4. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control, but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods for an approximation of a review expert's performance evaluation function. Due to limitations in ...

  5. The Obligation And Warranty To State Reasons Of Judicial Decisions Under The Paradigm Of The New Code Of Civil Procedure: A Right Of Democratic State Of Consolidation

    Directory of Open Access Journals (Sweden)

    Quezia Dornellas Fialho

    2017-02-01

    Full Text Available The constitutional state requires adherence to fundamental rights within the process. The duty, and likewise the guarantee, to state the reasons for judicial decisions should, together with other procedural principles, lead to a new way of thinking about the process, aiming not at speed at any cost, but at the safe conduct of the fundamental rights of the parties during the procedural motion. Thus, with respect to this duty-guarantee, the new Code of Civil Procedure innovated by establishing requirements for the adequate reasoning of judgments.

  6. Proposals to clarify and enhance the naming of fungi under the International Code of Nomenclature for algae, fungi, and plants.

    Science.gov (United States)

    Hawksworth, David L

    2015-06-01

    Twenty-three proposals to modify the International Code of Nomenclature for algae, fungi, and plants adopted in 2011 with respect to the provisions for fungi are made, in accordance with the wishes of mycologists expressed at the 10th International Mycological Congress in Bangkok in 2014, and with the support of the International Commission on the Taxonomy of Fungi (ICTF), the votes of which are presented here. The proposals relate to: conditions for epitypification, registration of later typifications, protected lists of names, removal of exemptions for lichen-forming fungi, provision of a diagnosis when describing a new taxon, citation of sanctioned names, avoiding homonyms in other kingdoms, ending preference for sexually typified names, and treatment of conspecific names with the same epithet. These proposals are also being published in Taxon, will be considered by the Nomenclature Committee for Fungi and General Committee on Nomenclature, and voted on at the 19th International Botanical Congress in Shenzhen, China, in 2017.

  7. Beyond dual-process models: A categorisation of processes underlying intuitive judgement and decision making

    NARCIS (Netherlands)

    Glöckner, A.; Witteman, C.L.M.

    2010-01-01

    Intuitive-automatic processes are crucial for making judgements and decisions. The fascinating complexity of these processes has attracted many decision researchers, prompting them to start investigating intuition empirically and to develop numerous models. Dual-process models assume a clear distinction between intuitive and deliberate processes.

  8. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    Science.gov (United States)

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
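
    A minimal sketch of the token-queue-driven, activity-transition-graph style of agent execution described above might look as follows (Python; all activity names and data are illustrative, and platform features such as code morphing and migration are omitted).

        from collections import deque

        # Illustrative ATG agent: a token queue drives execution, Petri-net-like;
        # each activity handler returns the next activity to fire, or None.
        class ATGAgent:
            def __init__(self):
                self.activities = {"sense": self.sense,
                                   "aggregate": self.aggregate,
                                   "report": self.report}
                self.tokens = deque(["sense"])   # token queue
                self.data = []

            def sense(self):
                self.data.append(42.0)           # stand-in for a sensor reading
                return "aggregate" if len(self.data) >= 3 else "sense"

            def aggregate(self):
                self.mean = sum(self.data) / len(self.data)
                return "report"

            def report(self):
                print(f"agent result: {self.mean}")
                return None                      # terminal activity

            def step(self):
                if not self.tokens:
                    return False
                nxt = self.activities[self.tokens.popleft()]()
                if nxt is not None:
                    self.tokens.append(nxt)      # fire the transition
                return True

        agent = ATGAgent()
        while agent.step():
            pass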

  9. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Stefan Bosse

    2015-02-01

    Full Text Available Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  10. Computer Simulation of Cure Process of an Axisymmetric Rubber Article Reinforced by Metal Plates Using Extended ABAQUS Code

    Directory of Open Access Journals (Sweden)

    M.H.R. Ghoreishy

    2013-01-01

    Full Text Available A finite element model is developed for simulation of the curing process of a thick axisymmetric rubber article reinforced by metal plates during the molding and cooling stages. The model consists of the heat transfer equation and a newly developed kinetics model for the determination of the state of cure in the rubber. The latter is based on a modification of the well-known Kamal-Sourour model. The thermal contact of the rubber with metallic surfaces (inserts and molds) and the variation of the thermal properties (conductivity and specific heat) with temperature and state of cure are taken into consideration. The ABAQUS code is used in conjunction with an in-house developed user subroutine to solve the governing equations. By comparing the temperature profile and the variation of the state of cure with experimentally measured data, the accuracy and applicability of the model are confirmed. It is also shown that this model can be successfully used for the optimization of the curing process, which gives rise to a reduction of the molding time.
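
    The Kamal-Sourour kinetics that the model modifies has the well-known form dα/dt = (k1 + k2·α^m)(1 − α)^n with Arrhenius rate constants. A hedged Python sketch of integrating this equation at constant temperature follows; the parameter values are placeholders, not those of the paper.

        import math

        # Kamal-Sourour cure kinetics, integrated with explicit Euler at a
        # constant temperature. All parameter values below are invented.
        R = 8.314
        A1, E1 = 1.0e5, 8.0e4     # assumed pre-exponential / activation energy
        A2, E2 = 5.0e5, 7.5e4
        m, n = 0.5, 1.5           # assumed reaction orders

        def cure_rate(alpha, T):
            k1 = A1 * math.exp(-E1 / (R * T))
            k2 = A2 * math.exp(-E2 / (R * T))
            return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

        alpha, T, dt = 1e-6, 480.0, 0.1   # start just above 0 to avoid 0**m
        for step in range(6000):
            alpha = min(1.0, alpha + dt * cure_rate(alpha, T))
        print(f"state of cure after {6000 * dt:.0f} s at {T} K: {alpha:.3f}")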

  11. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  12. Fission gas release behavior of MOX fuels under simulated daily-load-follow operation condition. IFA-554/555 test evaluation with FASTGRASS code

    International Nuclear Information System (INIS)

    Ikusawa, Yoshihisa; Ozawa, Takayuki

    2008-03-01

    IFA-554/555 load-follow tests were performed in the HALDEN reactor (HBWR) to study MOX fuel behavior under daily-load-follow operation conditions in the framework of ATR-MOX fuel development in JAEA. The IFA-554/555 rig was instrumented for rod inner pressure, fuel center temperature, fuel stack elongation, and cladding elongation. Although daily-load-follow operation in a nuclear power plant is one of the available options for economic improvement, the power changes over short periods in this operation cause changes in the thermal and mechanical irradiation conditions. In this report, the FP gas release behavior of a MOX fuel rod under the daily-load-follow operation condition was evaluated with the examination data from IFA-554/555 by using the computation code FASTGRASS. From the computation results of the FASTGRASS code, which can compute the FP gas release behavior under transient conditions, it could be concluded that FP gas was released due to the relaxation of the fuel pellet inner stress and the pellet temperature increase, which were caused by the cyclic power changes during daily-load-follow operation. In addition, since the amount of released FP gas decreased during the steady operation after the daily load follow, the total FP gas release at the end of life with daily-load-follow operation is not much different from that without it. (author)

  13. Summary report for ITER task - D10: Update and implementation of neutron transport and activation codes and processed libraries

    International Nuclear Information System (INIS)

    Attaya, H.

    1995-01-01

    The primary goal of this task is to provide the activation code RACC with the capability to treat pulsed operation modes. In addition, it is required that the code utilize the same spatial mesh and geometrical models as employed in the one- or multi-dimensional neutron transport codes used in the ITER design. This would ensure the use of the same neutron flux generated by those codes to calculate the different activation parameters. It is also required to have the capability of generating graphical outputs for the calculated activation parameters
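
    The effect of pulsed operation on activation can be illustrated with the standard build-up/decay balance dN/dt = P − λN during a pulse and pure decay between pulses. The Python sketch below uses invented values and is in no way the RACC implementation.

        import math

        # Activity build-up under pulsed irradiation, using the analytic solution
        # of dN/dt = P - lam*N over each pulse and free decay between pulses.
        # All numbers are illustrative assumptions.
        lam = math.log(2) / 600.0    # decay constant for an assumed 10-min half-life
        P = 1.0e12                   # assumed production rate during a pulse, 1/s
        t_on, t_off = 400.0, 1400.0  # assumed pulse length and dwell time, s

        N = 0.0
        for pulse in range(50):
            N = P / lam + (N - P / lam) * math.exp(-lam * t_on)  # during pulse
            N *= math.exp(-lam * t_off)                          # between pulses

        print(f"activity after 50 pulses: {lam * N:.3e} Bq")
        print(f"continuous-beam saturation activity: {P:.3e} Bq")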

  14. Catalogue of nuclear fusion codes - 1976

    International Nuclear Information System (INIS)

    1976-10-01

    A catalogue is presented of the computer codes in nuclear fusion research developed by JAERI, Division of Thermonuclear Fusion Research and Division of Large Tokamak Development in particular. It contains a total of about 100 codes under the categories: Atomic Process, Data Handling, Experimental Data Processing, Engineering, Input and Output, Special Languages and Their Application, Mathematical Programming, Miscellaneous, Numerical Analysis, Nuclear Physics, Plasma Physics and Fusion Research, Plasma Simulation and Numerical Technique, Reactor Design, Solid State Physics, Statistics, and System Program. (auth.)

  15. THE APPLICATION PROCESS OF HAMBURG RULES, GIVEN THE CONTEXT OF THE EMERGENCE AND ENTRY INTO FORCE OF THE NEW ROMANIAN CIVIL CODE

    Directory of Open Access Journals (Sweden)

    Adriana Elena Belu

    2013-11-01

    Full Text Available The paper aims to conduct a comparative analysis and tries to offer an objective point of view regarding a number of questions that have arisen in practice, related to the applicability of the 1978 Hamburg Rules and the safeguarding of the public order of Romanian private international law, such as: agreement by the Romanian parties upon the applicability of foreign law; the applicability of the Hamburg Rules; the public order of Romanian private international law; the character of the Hamburg Rules as public policy rules. In the application process of the Hamburg Rules, given the context of the emergence and entry into force of the New Civil Code, the provisions of the Romanian Civil Code shall obviously apply in addition, where the international convention is silent. Therefore, in order to apply the logic of the provisions of the Civil Code in full compliance with the international standards, though giving priority to the latter rules, a rigorous analysis is required, an analysis which becomes more complex given the fact that, in accordance with Art. 230 of Law no. 71/2011 to implement Law no. 287/2009 on the Civil Code, Book II "About Maritime Trade and Sailing" of the Commercial Code will be abolished upon the entry into force of the Maritime Code, as those provisions remain in force until then, being applied with priority over the rules of the Civil Code.

  16. Mashing of Rice with Barley Malt Under Nonconventional Process Conditions for Use in Food Processes

    DEFF Research Database (Denmark)

    Moe, T.; Adler-Nissen, Jens

    1994-01-01

    Non-conventional mashing conditions are relevant in the development of a lactic acid-fermented soymilk beverage where mashed rice is the source of carbohydrates for the fermentation and sweetness of the beverage. Advantages in the process layout could be achieved by mashing at higher pH and lower...... conditions when a mashing step is integrated in other food processes....

  17. Trends in Data Centre Energy Consumption under the European Code of Conduct for Data Centre Energy Efficiency

    Directory of Open Access Journals (Sweden)

    Maria Avgerinou

    2017-09-01

    Full Text Available Climate change is recognised as one of the key challenges humankind is facing. The Information and Communication Technology (ICT) sector, including data centres, generates up to 2% of global CO2 emissions, a number on par with the aviation sector's contribution, and data centres are estimated to have the fastest-growing carbon footprint across the whole ICT sector, mainly due to technological advances such as cloud computing and the rapid growth of the use of Internet services. There are no recent estimations of the total energy consumption of European data centres or of their energy efficiency. The aim of this paper is to evaluate, analyse and present the current trends in energy consumption and efficiency in data centres in the European Union using the data submitted by companies participating in the European Code of Conduct for Data Centre Energy Efficiency programme, a voluntary initiative created in 2008 in response to the increasing energy consumption in data centres and the need to reduce the related environmental, economic and energy supply security impacts. The analysis shows that the average Power Usage Effectiveness (PUE) of the facilities participating in the programme is declining year after year. This confirms that voluntary approaches could be effective in addressing climate and energy issues.
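
    For reference, PUE is simply the ratio of total facility energy to the energy delivered to IT equipment; a value of 1.0 would mean zero overhead. A one-line Python illustration with made-up figures:

        # PUE = total facility energy / IT equipment energy (figures invented)
        def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
            return total_facility_kwh / it_equipment_kwh

        print(pue(1_800_000, 1_000_000))  # 1.8: 0.8 kWh of overhead per IT kWh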

  18. Development of a dynamical model of a nuclear processes simulator for analysis and training in classroom based in the RELAP/SCDAP codes

    International Nuclear Information System (INIS)

    Salazar C, J.H.; Ramos P, J.C.; Salazar S, E.; Chavez M, C.

    2003-01-01

    The present work illustrates the application of the concept of a simulator for analysis, design, instruction and training in a classroom environment associated with a nuclear power station. Emphasis is placed on the methodology used to incorporate the best-estimate codes RELAP/SCDAP into a prototype under development at the Nuclear Reactor Engineering Analysis Laboratory (NREAL). This methodology is based on a modular structure where multiple processes can be executed in an independent way and where the generated information is stored in shared memory segments and distributed by means of communication routines developed in the C programming language. The utility of the system is demonstrated using highly interactive graphics (mimic diagrams, pictorials and tendency graphs) for the simultaneous dynamic visualization of the most significant variables of a typical transient event (feedwater controller failure in a BWR). A fundamental part of the system is its advanced graphic interface. This interface, of the direct-manipulation type, reproduces instruments and controls whose functionality is similar to those found in the current replica simulator for the Laguna Verde Nuclear Power Station. Finally, the evaluation process is described. The general behavior of the main variables for the selected transient event is interpreted, corroborating that they follow the same tendency as those reported for a BWR. The obtained results allow the conclusion that the developed system works satisfactorily and that the use of 1 x 1 real-time visualization tools offers important advantages over other traditional methods of analysis. (Author)
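
    The shared-memory exchange between independent processes that the abstract describes (implemented there with C communication routines) can be sketched conceptually as follows; this Python stand-in, with an invented "flux" value, only illustrates the pattern, not the NREAL implementation.

        from multiprocessing import Process, shared_memory
        import struct

        # One process writes a simulated variable into a shared segment;
        # another (here, the parent) reads it back for display.
        def physics_module(name: str):
            shm = shared_memory.SharedMemory(name=name)
            struct.pack_into("d", shm.buf, 0, 3.14e13)  # invented flux value
            shm.close()

        if __name__ == "__main__":
            seg = shared_memory.SharedMemory(create=True, size=8)
            p = Process(target=physics_module, args=(seg.name,))
            p.start(); p.join()
            flux, = struct.unpack_from("d", seg.buf, 0)
            print(f"display module read flux = {flux:.2e}")
            seg.close(); seg.unlink()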

  19. Differentiation of ileostomy from colostomy procedures: assessing the accuracy of current procedural terminology codes and the utility of natural language processing.

    Science.gov (United States)

    Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K

    2013-08-01

    Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of current procedural terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differs dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal, and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the master surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011. The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (75
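
    The validation arithmetic used throughout the study is the standard one; the Python sketch below shows it with invented counts, not the study's data.

        # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
        # The counts below are illustrative assumptions only.
        def sensitivity(tp: int, fn: int) -> float:
            return tp / (tp + fn)

        def specificity(tn: int, fp: int) -> float:
            return tn / (tn + fp)

        print(f"sensitivity = {sensitivity(580, 84):.1%}, "
              f"specificity = {specificity(940, 24):.1%}")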

  20. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  1. System for measuring the effect of fouling and corrosion on heat transfer under simulated OTEC conditions. [HTAU and LABTTF codes

    Energy Technology Data Exchange (ETDEWEB)

    Fetkovich, J.G.

    1976-12-01

    A complete system designed to measure, with high precision, changes in heat transfer rates due to fouling and corrosion of simulated heat exchanger tubes, at sea and under OTEC conditions is described. All aspects of the system are described in detail, including theory, mechanical design, electronics design, assembly procedures, test and calibration, operating procedures, laboratory results, field results, and data analysis programs.

  2. MO-G-BRE-05: Clinical Process Improvement and Billing in Radiation Oncology: A Case Study of Applying FMEA for CPT Code 77336 (continuing Medical Physics Consultation)

    International Nuclear Information System (INIS)

    Spirydovich, S; Huq, M

    2014-01-01

    Purpose: The improvement of quality in healthcare can be assessed by Failure Mode and Effects Analysis (FMEA). In radiation oncology, FMEA, as applied to the billing CPT code 77336, can improve both charge capture and, most importantly, the quality of the performed services. Methods: We created an FMEA table for the process performed under CPT code 77336. For a given process step, each member of the assembled team (physicist, dosimetrist, and therapist) independently assigned numerical values for: probability of occurrence (O, 1–10), severity (S, 1–10), and probability of detection (D, 1–10) for every failure mode cause and effect combination. The risk priority number, RPN, was then calculated as the product of O, S and D, from which an average RPN was calculated for each combination mentioned above. A fault tree diagram, with each process sorted into 6 categories, was created with linked RPNs. For processes with high RPN, recommended actions were assigned. Two separate record-and-verify (R&V) systems (Lantis and EMR-based ARIA) were considered. Results: We identified 9 potential failure modes and 19 corresponding potential causes of these failure modes, all resulting in an unjustified 77336 charge and compromised quality of care. In Lantis, the range of RPN was 24.5–110.8, and of S values 2–10. The highest-ranking RPN of 110.8 came from the failure mode described as “end-of-treatment check not done before the completion of treatment”, and the highest S value of 10 (RPN=105) from “overrides not checked”. For the same failure modes, within the ARIA electronic environment with its additional controls, RPN values were significantly lower (44.3 for the missing end-of-treatment check and 20.0 for overrides not checked). Conclusion: Our work has shown that where charge capture was missed, some services had also not been performed. Absence of such necessary services may result in sub-optimal quality of care rendered to patients
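
    The RPN bookkeeping described above reduces to a small calculation: each team member scores O, S and D on 1-10 scales, RPN = O × S × D, and the per-member RPNs are averaged. A hedged Python sketch with invented scores:

        # Average RPN for one failure-mode cause/effect combination.
        # scores: one (O, S, D) tuple per team member; values are invented.
        def average_rpn(scores):
            rpns = [o * s * d for o, s, d in scores]
            return sum(rpns) / len(rpns)

        # e.g. physicist, dosimetrist, therapist scoring the same combination
        print(average_rpn([(4, 9, 3), (5, 10, 2), (4, 8, 3)]))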

  3. Development of severe accident analysis code - A study on the molten core-concrete interaction under severe accidents

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Chang Hyun; Lee, Byung Chul; Huh, Chang Wook; Kim, Doh Young; Kim, Ju Yeul [Seoul National University, Seoul (Korea, Republic of)

    1996-07-01

    The purpose of this study is to understand the phenomena of the molten core/concrete interaction (MCCI) during a hypothetical severe accident, and to develop a model for the heat transfer and physical phenomena in MCCIs. The contents of this study are: analysis of the mechanisms in MCCIs and assessment of heat transfer models; evaluation of the models in the CORCON code and their verification using the SWISS and SURC experiments; assessment of 1000 MWe PWR reactor cavity coolability; and establishment of a model for prediction of crust formation and melt-pool temperature. The properties and flow conditions of the melt pool covering the conditions of a severe accident are used to evaluate the heat transfer coefficients in each reviewed model. Also, the scope and limitations of each model for application are assessed. A phenomenological analysis is performed with MELCOR 1.8.2 and MELCOR 1.8.3, and its results are compared with the corresponding experimental reports of the SWISS and SURC experiments. A calculation is also performed to assess the 1000 MWe PWR reactor cavity coolability. To improve the heat transfer model between the melt pool and the overlying coolant and to analyze the phase change of the melt pool, two-dimensional governing equations are established using the enthalpy method and a computational program is developed in this study. A benchmarking calculation is performed and its results are compared to an experiment that did not consider the effects of coolant boiling and gas injection. Ultimately, the model shall be developed to consider the gas injection and coolant boiling effects. 66 refs., 10 tabs., 29 figs. (author)
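
    The enthalpy method mentioned for the melt-pool phase-change model advances an enthalpy field and recovers temperature and phase from it, which avoids tracking the crust front explicitly. The one-dimensional Python sketch below illustrates the idea with invented properties; it is not the study's two-dimensional program.

        import numpy as np

        # 1-D enthalpy-method sketch: conduction updates H, then T and phase are
        # recovered from H. Material properties and geometry are invented.
        nx, dx, dt = 50, 0.01, 5.0
        rho, cp, k, L, T_melt = 8000.0, 500.0, 3.0, 2.5e5, 2100.0

        T = np.full(nx, 2400.0)      # molten pool, initially above T_melt
        T[0] = 400.0                 # cooled boundary where a crust forms
        H = rho * cp * T + rho * L * (T > T_melt)   # enthalpy per unit volume

        for _ in range(2000):
            q = -k * np.diff(T) / dx                  # flux between cells
            H[1:-1] += dt * (q[:-1] - q[1:]) / dx     # interior energy balance
            H_s = rho * cp * T_melt                   # enthalpy at the solidus
            solid = H <= H_s
            mushy = (H > H_s) & (H <= H_s + rho * L)
            liquid = H > H_s + rho * L
            T = np.where(solid, H / (rho * cp), T_melt)
            T = np.where(mushy, T_melt, T)
            T = np.where(liquid, (H - rho * L) / (rho * cp), T)

        crust_cells = np.count_nonzero(T < T_melt)
        print(f"crust thickness ~ {crust_cells * dx:.2f} m after {2000 * dt:.0f} s")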

  4. Physical models and codes for prediction of activity release from defective fuel rods under operation conditions and in leakage tests during refuelling

    International Nuclear Information System (INIS)

    Likhanskii, V.; Evdokimov, I.; Khoruzhii, O.; Sorokin, A.; Novikov, V.

    2003-01-01

    It is appropriate to use dependences based on physical models in design-analytical codes for improving the reliability of defective fuel rod detection and for determining defect characteristics by activity measurements in the primary coolant. In this paper, results on the development of some physical models and integral mechanistic codes intended for prediction of defective fuel rod behaviour are presented. The analysis of mass transfer and mass exchange between fuel rod and coolant showed that the rates of these processes depend on many factors, such as turbulent coolant flow, pressure, the effective hydraulic diameter of the defect, and fuel rod geometric parameters. Models describing these dependences have been created. The models of thermomechanical fuel behaviour and stable gaseous FP release were modified, and the new computer code RTOP-CA was created thereupon for the description of defective fuel rod behaviour and activity release into the primary coolant. The model of fuel oxidation under in-pile conditions, which includes radiolysis, and the RTOP-LT code are planned, after validation of the physical models, to be used for prediction of defective fuel rod behaviour

  5. The Narrative-Emotion Process Coding System 2.0: A multi-methodological approach to identifying and assessing narrative-emotion process markers in psychotherapy.

    Science.gov (United States)

    Angus, Lynne E; Boritz, Tali; Bryntwick, Emily; Carpenter, Naomi; Macaulay, Christianne; Khattra, Jasmine

    2017-05-01

    Recent studies suggest that it is not simply the expression of emotion or emotional arousal in session that is important, but rather it is the reflective processing of emergent, adaptive emotions, arising in the context of personal storytelling and/or Emotion-Focused Therapy (EFT) interventions, that is associated with change. To enhance narrative-emotion integration specifically in EFT, Angus and Greenberg originally identified a set of eight clinically derived narrative-emotion integration markers for the implementation of process-guiding therapeutic responses. Further evaluation and testing by the Angus Narrative-Emotion Marker Lab resulted in the identification of 10 empirically validated Narrative-Emotion Process (N-EP) markers that are included in the Narrative-Emotion Process Coding System Version 2.0 (NEPCS 2.0). Based on empirical research findings, individual markers are clustered into Problem (e.g., stuckness in repetitive story patterns, over-controlled or dysregulated emotion, lack of reflectivity), Transition (e.g., reflective, access to adaptive emotions and new emotional plotlines, heightened narrative and emotion integration), and Change (e.g., new story outcomes and self-narrative discovery, and co-construction and re-conceptualization) subgroups. To date, research using the NEPCS 2.0 has investigated the proportion and pattern of narrative-emotion markers in Emotion-Focused, Client-Centered, and Cognitive Therapy for Major Depression, Motivational Interviewing plus Cognitive Behavioral Therapy for Generalized Anxiety Disorder, and EFT for Complex Trauma. Results have consistently identified significantly higher proportions of N-EP Transition and Change markers, and productive shifts, in mid- and late-phase sessions, for clients who achieved recovery by treatment termination. Recovery is consistently associated with client storytelling that is emotionally engaged, reflective, and evidencing new story outcomes and self-narrative discovery.

  6. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  7. Study of probing beam enlargement due to forward-scattering under low wavenumber turbulence using a FDTD full-wave code

    Energy Technology Data Exchange (ETDEWEB)

    Silva, F. da [Associação EURATOM/IST, IPFN-LA, Instituto Superior Tecnico, Lisbon (Portugal); Heuraux, S. [Institut Jean Lamour, CNRS-Nancy-Universite, BP70239, Vandoeuvre-les-Nancy (France); Gusakov, E.; Popov, A. [Ioffe Institute, Polytekhnicheskaya, St Petersburg (Russian Federation)

    2011-07-01

    Forward-scattering under high levels of turbulence or long propagation paths can induce significant effects, as predicted by theory, and impose a signature on the Doppler reflectometry response. Simulations using a FDTD (finite-difference time-domain) full-wave code have confirmed the main dependencies and general behavior described by theory, but display a returned RMS power, at moderate amplitudes, of half that predicted by theory, due to the impossibility of meeting the numerical requirements needed to describe the small-wavenumber spectrum with the desired accuracy. One contributing factor may be the splitting and enlargement of the probing beam. At high turbulence levels, the scattered power returning to the antenna is higher than predicted by theory, probably because the scattering zone lies closer than the oblique cutoff. This loss of coherence of the wavefront induces a beam spreading, which is also responsible for a reduction of the wavenumber resolution. With a FDTD full-wave code we study the behavior of the probing beam under several amplitude levels of low-wavenumber plasma turbulence, using long temporal simulation series to ensure statistical accuracy. (authors)

  8. Analysis of fuel pin behavior under slow-ramp type transient overpower condition by using the fuel performance evaluation code 'FEMAXI-FBR'

    International Nuclear Information System (INIS)

    Tsuboi, Yasushi; Ninokata, Hisashi; Endo, Hiroshi; Ishizu, Tomoko; Tatewaki, Isao; Saito, Hiroaki

    2012-01-01

    FEMAXI-FBR has been developed as one module of the core disruptive accident analysis code ASTERIA-FBR in order to evaluate mixed oxide (MOX) fuel performance consistently under steady, transient and accident conditions of fast reactors. On the basis of the light water reactor (LWR) fuel performance evaluation code FEMAXI-6, FEMAXI-FBR incorporates specific models for fast reactor fuel performance, such as restructuring and material migration during steady state and transients, and melting cavity formation and cavity pressure during accidents, so that it can evaluate fuel failure during an accident. A test pin from the slow transient overpower test of the CABRI-2 program was analyzed from the steady state through the transient. The test pin was pre-irradiated and tested under transient overpower at a ramp rate of several % P0/s (P0: steady-state power). Analysis results for the gas release ratio, pin failure time, and fuel melt radius were compared to measured values. The analysis results for the steady-state and transient performances were also compared with the measured values; the compared performances are the gas release ratio and fuel restructuring for the steady state, and the linear power and melt radius at failure during the transient. The analysis results reproduce the measured values. It was concluded that FEMAXI-FBR is effective for evaluating fast reactor fuel performance from steady-state to accident conditions. (author)

  9. EXTRA·M: a computing code system for analysis of the Purex process with mixer settlers for reprocessing

    International Nuclear Information System (INIS)

    Tachimori, Shoichi

    1994-03-01

    A computer code system, EXTRA·M, for simulation of the transient behavior of solutes in a multistage countercurrent extraction process, was developed with the aim of predicting the distribution and chemical behavior of actinide elements, i.e., U, Pu and Np, and of technetium in the Purex process of fuel reprocessing. The mathematical model is applicable to complete-mixing stagewise contactors, such as mixer-settlers, and to the Purex tri-n-butylphosphate (TBP)-nitric acid system. The main characteristics of EXTRA·M are as follows: i) calculation of distribution ratios of the solutes is based on numerical equations whose parameter values are determined by a best-fit method against a large body of experimental data; ii) a total of 18 solutes (U(IV), U(VI), Pu(III), Pu(IV), Pu(V), Pu(VI), Np(IV), Np(V), Np(VI), Tc(IV), Tc(V), Tc(VI), Tc(VII), Zr(IV), HNO3, hydrazine, hydroxylamine nitrate and nitrous acid) are treated, and rate equations for a total of 40 chemical reactions involving these solutes are incorporated; iii) instantaneous changes of flow conditions, i.e., the concentrations of the solutes and the flow rates of the feeding solutions, can be handled in the computation; iv) reflux or bypass mode calculations, in which an aqueous raffinate stream is transferred to the preceding bank or stage, are possible. The present report explains the concept, assumptions and characteristics of the model, the material balance equations including the distribution and reaction rate equations and their solution method, and the usefulness of the model, showing some examples of the verification results. A description and source program of EXTRA·M1, as an example, are listed in the annex. (J.P.N.) 63 refs.
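
    The stagewise complete-mixing model underlying such codes can be illustrated far more simply than EXTRA·M does it: the Python sketch below solves a single-solute countercurrent mixer-settler bank with a constant distribution ratio by fixed-point iteration (all values assumed; no reaction kinetics, unlike the real code).

        # Steady state of a countercurrent mixer-settler bank, complete mixing
        # per stage, equilibrium split y = D*x at each stage. Values invented.
        N = 8               # stages; aqueous flows 1 -> N, organic flows N -> 1
        Fa, Fo = 1.0, 1.2   # aqueous / organic flow rates (arbitrary units)
        D = 5.0             # distribution ratio y/x at equilibrium
        x_feed = 1.0        # solute concentration in the aqueous feed

        x = [0.0] * N       # aqueous concentration leaving each stage
        y = [0.0] * N       # organic concentration leaving each stage

        for _ in range(500):                           # iterate to steady state
            for i in range(N):
                x_in = x_feed if i == 0 else x[i - 1]  # aqueous from upstream
                y_in = 0.0 if i == N - 1 else y[i + 1] # fresh solvent at stage N
                total_in = Fa * x_in + Fo * y_in
                x[i] = total_in / (Fa + Fo * D)        # stage mass balance
                y[i] = D * x[i]

        print(f"raffinate x = {x[-1]:.2e}, loaded solvent y = {y[0]:.3f}")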

  10. Implementation and use of Gaussian process meta model for sensitivity analysis of numerical models: application to a hydrogeological transport computer code

    International Nuclear Information System (INIS)

    Marrel, A.

    2008-01-01

    In studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutants. These computer codes can depend on a high number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and can often be too expensive in computing time. To conduct uncertainty propagation studies and to measure the importance of each input on the response variability, the computer code has to be approximated by a meta-model which is built on an acceptable number of simulations of the code and requires a negligible calculation time. We focused our research work on the use of Gaussian process meta-models to carry out the sensitivity analysis of the code. We proposed a methodology with estimation and input selection procedures in order to build the meta-model in the case of a high number of inputs and with few simulations available. Then, we compared two approaches to computing the sensitivity indices with the meta-model and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process meta-model. Finally, we extended our statistical tools to a functional output of a computer code. We combined a decomposition on a wavelet basis with Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code simulating radionuclide transport in groundwater. (author)
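
    A hedged sketch of the overall methodology (fit a Gaussian-process meta-model on a small design of experiments, then estimate first-order Sobol' sensitivity indices from cheap meta-model predictions) is given below in Python using scikit-learn; the "simulator" is a stand-in analytic function, and the estimator is a crude conditional-mean approach rather than the thesis's algorithms.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        rng = np.random.default_rng(0)

        def simulator(X):              # placeholder for the expensive code
            return np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * X[:, 2]

        X_train = rng.uniform(0, 1, size=(60, 3))   # small design of experiments
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_train, simulator(X_train))

        # crude first-order Sobol' estimate: Var_xi(E[Y | xi]) / Var(Y),
        # computed entirely from cheap meta-model predictions
        var_total = gp.predict(rng.uniform(0, 1, size=(20000, 3))).var()
        for i in range(3):
            cond_means = []
            for xi in np.linspace(0, 1, 50):
                Xc = rng.uniform(0, 1, size=(200, 3))
                Xc[:, i] = xi          # freeze input i, average over the rest
                cond_means.append(gp.predict(Xc).mean())
            print(f"S{i + 1} ~ {np.var(cond_means) / var_total:.2f}")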

  11. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present
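
    As a concrete illustration of the toric code structure described above, the sketch below builds the star and plaquette stabilizers of an L x L toric code in the binary symplectic representation and verifies that they all commute (their symplectic inner products vanish). This is a generic textbook construction assumed for illustration; it is not the author's simulation code, and threshold calculations would require an error model and a decoder on top of it.

```python
import numpy as np

L = 4                       # lattice size; 2*L*L qubits live on the edges
n = 2 * L * L               # horizontal edges indexed 0..L*L-1, vertical after

def h(x, y):  # horizontal edge leaving vertex (x, y), periodic boundaries
    return (x % L) * L + (y % L)

def v(x, y):  # vertical edge leaving vertex (x, y), periodic boundaries
    return L * L + (x % L) * L + (y % L)

stabilizers = []            # rows are (X-part | Z-part) binary vectors
for x in range(L):
    for y in range(L):
        star = np.zeros(2 * n, dtype=int)      # X-type star at vertex (x, y)
        for q in (h(x, y), h(x - 1, y), v(x, y), v(x, y - 1)):
            star[q] = 1
        plaq = np.zeros(2 * n, dtype=int)      # Z-type plaquette at face (x, y)
        for q in (h(x, y), h(x, y + 1), v(x, y), v(x + 1, y)):
            plaq[n + q] = 1
        stabilizers += [star, plaq]

S = np.array(stabilizers)
# Symplectic inner product: a*b = aX.bZ + aZ.bX (mod 2); zero <=> commuting.
G = (S[:, :n] @ S[:, n:].T + S[:, n:] @ S[:, :n].T) % 2
print("all stabilizers commute:", not G.any())
```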

  12. Disposal Notifications and Quarterly Membership Updates for the Utility Solid Waste Group Members’ Risk-Based Approvals to Dispose of PCB Remediation Waste Under Title 40 of the Code of Federal Regulations Section 761.61(c)

    Science.gov (United States)

    Disposal Notifications and Quarterly Membership Updates for the Utility Solid Waste Group Members’ Risk-Based Approvals to Dispose of Polychlorinated Biphenyl (PCB) Remediation Waste Under Title 40 of the Code of Federal Regulations Section 761.61(c)

  13. JASMINE-pro: A computer code for the analysis of propagation process in steam explosions. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yanhua; Nilsuwankosit, Sunchai; Moriyama, Kiyofumi; Maruyama, Yu; Nakamura, Hideo; Hashimoto, Kazuichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2000-12-01

    A steam explosion is a phenomenon in which a high-temperature liquid transfers its internal energy very rapidly to another, low-temperature volatile liquid, causing a very strong pressure build-up due to rapid vaporization of the latter. In the field of light water reactor safety research, steam explosions caused by the contact of molten core and coolant have been recognized as a potential threat which could cause failure of the pressure vessel or the containment vessel during a severe accident. The numerical simulation code JASMINE was developed at the Japan Atomic Energy Research Institute (JAERI) to evaluate the impact of steam explosions on the integrity of reactor boundaries. The JASMINE code consists of two parts, JASMINE-pre and -pro, which handle the premixing and propagation phases of steam explosions, respectively. The JASMINE-pro code simulates the thermo-hydrodynamics of the propagation phase of a steam explosion on the basis of a multi-fluid model of multiphase flow. This report, the 'User's Manual', describes the usage of the JASMINE-pro code as well as its code structure, which should help users understand how the code works. (author)

  14. JASMINE-pro: A computer code for the analysis of propagation process in steam explosions. User's manual

    International Nuclear Information System (INIS)

    Yang, Yanhua; Nilsuwankosit, Sunchai; Moriyama, Kiyofumi; Maruyama, Yu; Nakamura, Hideo; Hashimoto, Kazuichiro

    2000-12-01

    A steam explosion is a phenomenon in which a high-temperature liquid transfers its internal energy very rapidly to another, low-temperature volatile liquid, causing a very strong pressure build-up due to rapid vaporization of the latter. In the field of light water reactor safety research, steam explosions caused by the contact of molten core and coolant have been recognized as a potential threat which could cause failure of the pressure vessel or the containment vessel during a severe accident. The numerical simulation code JASMINE was developed at the Japan Atomic Energy Research Institute (JAERI) to evaluate the impact of steam explosions on the integrity of reactor boundaries. The JASMINE code consists of two parts, JASMINE-pre and -pro, which handle the premixing and propagation phases of steam explosions, respectively. The JASMINE-pro code simulates the thermo-hydrodynamics of the propagation phase of a steam explosion on the basis of a multi-fluid model of multiphase flow. This report, the 'User's Manual', describes the usage of the JASMINE-pro code as well as its code structure, which should help users understand how the code works. (author)

  15. Development of TPNCIRC code for Evaluation of Two-Phase Natural Circulation Flow Performance under External Reactor Vessel Cooling Conditions

    International Nuclear Information System (INIS)

    Choi, A-Reum; Song, Hyuk-Jin; Park, Jong-Woon

    2015-01-01

    During a severe accident, corium is relocated to the lower head of the nuclear reactor pressure vessel (RPV). The design concept of retaining the corium inside the RPV through external cooling under hypothetical core-melting accidents is called external reactor vessel cooling (ERVC). In this respect, a validated two-phase natural circulation flow (TPNC) model is necessary to determine the adequacy of ERVC design and operating conditions such as inlet area, form losses, gap distance, riser length and coolant conditions. The most important models generally characterizing TPNC are the void fraction and two-phase friction models. Typical experimental and analytical studies on two-phase circulation flow characteristics are those by Reyes and by Gartia et al., based on Vijayan et al., Nayak et al. and Dubey et al. In the present paper, TPNC flow characteristics under ERVC conditions are studied using the two existing TPNC flow models of Reyes and of Gartia et al., incorporating improved void fraction and two-phase friction models. These models and correlations are integrated into a computer program, TPNCIRC, which can handle candidate ERVC design parameters, such as inlet, riser and downcomer flow lengths and areas, the gap size between the reactor vessel and the surrounding insulation, and minor loss factors, as well as the operating parameters of decay power, pressure and subcooling. The accuracy of the TPNCIRC program is investigated with respect to flow rates and void fractions against existing measured data from a general experiment and from ULPU, specifically designed for AP1000 in-vessel retention. The effect of some important design parameters is also examined for the experimental and plant conditions. A number of void fraction correlations integrated into TPNCIRC have been examined; the differences among their predictions appear to come from the differences in the void fraction models.
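
    The basic balance that a code like TPNCIRC resolves, buoyancy head against friction and form losses around the loop, can be illustrated with a single-phase steady-state sketch. The snippet below is a simplified illustration with hypothetical geometry and loss coefficients, not the TPNCIRC models; a real ERVC analysis would add the void-fraction and two-phase friction-multiplier correlations discussed above.

```python
from scipy.optimize import brentq

# Steady-state single-phase natural circulation: buoyancy head = losses.
# All geometry and coefficients below are hypothetical placeholders.
rho = 750.0       # hot-leg average density [kg/m^3]
rho_cold = 990.0  # downcomer density [kg/m^3]
g, H = 9.81, 3.0  # gravity [m/s^2], riser height [m]
A = 0.05          # flow area [m^2]
K = 12.0          # lumped form-loss coefficient around the loop
f_LD = 8.0        # friction factor times L/D, lumped

def residual(m_dot):
    v = m_dot / (rho * A)                     # velocity in the riser
    dp_buoy = (rho_cold - rho) * g * H        # driving head [Pa]
    dp_loss = (K + f_LD) * 0.5 * rho * v**2   # friction + form losses [Pa]
    return dp_buoy - dp_loss

m_dot = brentq(residual, 1e-6, 1e4)           # root-find for the mass flow
print(f"natural circulation flow: {m_dot:.1f} kg/s")
```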

  16. Exploring the Use of Design of Experiments in Industrial Processes Operating Under Closed-Loop Control

    DEFF Research Database (Denmark)

    Capaci, Francesca; Kulahci, Murat; Vanhatalo, Erik

    2017-01-01

    Industrial manufacturing processes often operate under closed-loop control, where automation aims to keep important process variables at their set-points. In process industries such as pulp, paper, chemical and steel plants, it is often hard to find production processes operating in open loop. Instead, closed-loop control systems will actively attempt to minimize the impact of process disturbances. However, we argue that an implicit assumption in most experimental investigations is that the studied system is open loop, allowing the experimental factors to freely affect the important system responses. This scenario is typically not found in process industries. The purpose of this article is therefore to explore issues of experimental design and analysis in processes operating under closed-loop control and to illustrate how Design of Experiments can help in improving and optimizing...

  17. FIFI 3: A digital computer code for the solution of sets of first order differential equations and the analysis of process plant dynamics

    International Nuclear Information System (INIS)

    Sumner, H.M.

    1965-11-01

    FIFI 3 is a FORTRAN code embodying a technique for the analysis of process plant dynamics. As such, it is essentially a tool for the integration of sets of first-order ordinary differential equations, either linear or non-linear; special provision is made for the inclusion of time-delayed variables in the mathematical model of the plant. The method of integration is new and is centred on a stable multistep predictor-corrector algorithm devised by the late Mr. F.G. Chapman of the UKAEA, Winfrith. The theory on which the code is based and detailed rules for using it are described in Parts I and II respectively. (author)
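
    Chapman's algorithm itself is not reproduced in the report excerpt, but the general shape of a multistep predictor-corrector integrator for such equation sets can be sketched as follows. This is a generic two-step Adams-Bashforth predictor with a trapezoidal (Adams-Moulton) corrector, shown purely for illustration; it is not FIFI 3's scheme.

```python
import numpy as np

def integrate_pc(f, y0, t0, t_end, h):
    """Two-step Adams-Bashforth predictor + Adams-Moulton corrector (PECE).
    A generic multistep scheme for dy/dt = f(t, y); illustrative only."""
    t = t0
    y = np.asarray(y0, dtype=float)
    f_old = f(t, y)
    # Bootstrap the second starting value with a midpoint (RK2) step.
    y = y + h * f(t + 0.5 * h, y + 0.5 * h * f_old)
    t += h
    ts, ys = [t0, t], [np.asarray(y0, dtype=float), y.copy()]
    while t < t_end - 1e-12:
        f_cur = f(t, y)
        y_pred = y + h * (1.5 * f_cur - 0.5 * f_old)        # AB2 predict
        y = y + 0.5 * h * (f(t + h, y_pred) + f_cur)        # AM2 correct
        t, f_old = t + h, f_cur
        ts.append(t); ys.append(y.copy())
    return np.array(ts), np.array(ys)

# Example: first-order plant dy/dt = -2y + u(t) with a unit step input.
ts, ys = integrate_pc(lambda t, y: -2.0 * y + 1.0, [0.0], 0.0, 3.0, 0.05)
print(f"y(3) = {ys[-1, 0]:.4f}  (analytic: {0.5 * (1 - np.exp(-6)):.4f})")
```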

  18. Calculation of the real states of Ignalina NPP Unit 1 and Unit 2 RBMK-1500 reactors in the verification process of QUABOX/CUBBOX code

    International Nuclear Information System (INIS)

    Bubelis, E.; Pabarcius, R.; Demcenko, M.

    2001-01-01

    Calculations of the main neutron-physical characteristics of the RBMK-1500 reactors of Ignalina NPP Unit 1 and Unit 2 were performed, taking real reactor core states as the basis for these calculations. Comparison of the calculation results obtained using the QUABOX/CUBBOX code with experimental data and with calculation results obtained using the STEPAN code showed that all the main neutron-physical characteristics of the reactors of Unit 1 and Unit 2 of Ignalina NPP are within the safe deviation range of the analyzed parameters, and that the reactors of Ignalina NPP, during the process of reactor core composition change, are operated in a safe and stable manner. (author)

  19. 20 CFR 10.7 - What forms are needed to process claims under the FECA?

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false What forms are needed to process claims under the FECA? 10.7 Section 10.7 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS, DEPARTMENT...' COMPENSATION ACT, AS AMENDED General Provisions Definitions and Forms § 10.7 What forms are needed to process...

  20. Thermal-hydraulic analysis under partial loss of flow accident hypothesis of a plate-type fuel surrounded by two water channels using RELAP5 code

    Directory of Open Access Journals (Sweden)

    Itamar Iliuk

    2016-01-01

    Thermal-hydraulic analysis of plate-type fuel is of great importance to the establishment of safety criteria and to the licensing of the future nuclear reactor intended to propel the Brazilian nuclear submarine. In this work, an analysis of a single plate-type fuel element surrounded by two water channels was performed using the RELAP5 thermal-hydraulic code. For the simulations, a plate-type fuel with a meat of uranium dioxide sandwiched between two Zircaloy-4 plates was modelled. A partial loss of flow accident was simulated to show the behavior of the model under this type of accident. The results show that the critical heat flux was detected in the central region along the axial direction of the plate when the right water channel was blocked.

  1. Fuel performance analysis code 'FAIR'

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1994-01-01

    For modelling the behaviour of nuclear reactor fuel rods of water-cooled reactors under severe power maneuvering and high burnups, a mechanistic fuel performance analysis code, FAIR, has been developed. The code incorporates a finite element based thermomechanical module, a physically based fission gas release module, and relevant models for fuel-related phenomena such as pellet cracking, densification and swelling, radial flux redistribution across the pellet due to the build-up of plutonium near the pellet surface, and pellet-clad mechanical interaction/stress corrosion cracking (PCMI/SCC) failure of the sheath. The code follows the established principles of fuel rod analysis programmes, such as coupling the thermal and mechanical solutions along with the fission gas release calculations, analysing different axial segments of the fuel rod simultaneously, and providing means for performing local analyses such as clad ridging analysis. The modular nature of the code offers flexibility in making modifications easily, for example for modelling MOX fuels and thorium-based fuels. For performing analyses of fuel rods subjected to very long power histories within a reasonable amount of time, the code has been parallelised and commissioned on the ANUPAM parallel processing system developed at Bhabha Atomic Research Centre (BARC). (author). 37 refs

  2. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop an information processing model with multiple stages that represents the information flow. The uncertainty of the information is then quantified using Conant's model, a kind of information theory. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload.
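
    Conant's partition of information flows is not detailed in the abstract, but the basic quantification step, measuring the uncertainty of a signal in bits with Shannon's entropy, can be sketched as follows. The alarm-probability numbers are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def shannon_entropy(p):
    """Entropy H = -sum p_i log2 p_i in bits; the building block of
    information-theoretic throughput measures such as Conant's."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log(0) taken as 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical example: distribution over 8 possible annunciator states
# presented to an operator during a control task.
p_input = [0.40, 0.20, 0.15, 0.10, 0.06, 0.05, 0.03, 0.01]
print(f"input uncertainty: {shannon_entropy(p_input):.3f} bits per event")

# Under overload the operator effectively lumps rare states together,
# reducing the amount of information actually processed.
p_processed = [0.40, 0.20, 0.15, 0.25]   # rare states merged
print(f"processed: {shannon_entropy(p_processed):.3f} bits per event")
```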

  3. BURDEN OF PROOF IN CONSTITUTIONAL PROCESS UNDER CONSIDERATION OF DEMOCRATIC STATE

    Directory of Open Access Journals (Sweden)

    Patrícia Mendanha Dias

    2016-12-01

    Under the aegis of the democratic rule of law, the adversarial principle (contraditório) appears as a fundamental premise of the constitutionalised process. In this view, the adversarial principle, more than the mere right of a party to exercise its defense, should be seen as a form of co-participation in the process, allowing the parties, through the production of evidence, to effectively influence the determination of the right. Thus, attempts to mitigate the adversarial principle in the evidentiary phase should be rejected, and the requirement that a party produce proof that is impossible or excessively difficult to make must be set aside, under penalty of offending the process as understood in the modern normative framework.

  4. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions

    OpenAIRE

    Bakanauskienė Irena; Baronienė Laura

    2017-01-01

    This article is intended to theoretically justify the decision-making process model for cases in which active participation of investing entities in controlling the activities of an organisation and their results is noticeable. Based on an analysis of the scientific literature, a concept of controlled conditions is formulated and, using a rational approach to the decision-making process, a model of an 11-step decision-making process under controlled intervention is presented. Conditions describing the case of controlled interventions have also been unified, providing preconditions to ensure the adequacy of the proposed decision-making process model.

  5. Post-Processing of Dynamic Gadolinium-Enhanced Magnetic Resonance Imaging Exams of the Liver: Explanation and Potential Clinical Applications for Color-Coded Qualitative and Quantitative Analysis

    International Nuclear Information System (INIS)

    Wang, L.; Bos, I.C. van den; Hussain, S.M.; Pattynama, P.M.; Vogel, M.W.; Krestin, G.P.

    2008-01-01

    The purpose of this article is to explain and illustrate the current status and potential applications of automated and color-coded post-processing techniques for the analysis of dynamic multiphasic gadolinium-enhanced magnetic resonance imaging (MRI) of the liver. Post-processing of these images on dedicated workstations allows the generation of time-intensity curves (TIC) as well as color-coded images, which provide useful information on (neo)angiogenesis within a liver lesion, if necessary combined with information on enhancement patterns of the surrounding liver parenchyma. Analysis of TIC and color-coded images, which are based on pharmacokinetic modeling, yields an easy-to-interpret schematic presentation of tumor behavior, providing additional characteristics for an adequate differential diagnosis. Inclusion of TIC and color-coded images as part of the routine abdominal MRI workup protocol may help to further improve the specificity of MRI findings, but needs to be validated in clinical decision-making situations. In addition, these tools may facilitate the diagnostic workup of disease for detection, characterization, staging, and monitoring of antitumor therapy, and hold incremental value over the widely used tumor response criteria

  6. Current status of high energy nucleon-meson transport code

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi; Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    The current status of the accelerator design code NMTC/JAERI, an outline of its physical model, and an evaluation of the code's accuracy are reported. To evaluate the nuclear performance of an accelerator and an intense spallation neutron source, the nuclear reactions between high-energy protons and target nuclides and the behavior of the various produced particles must be described. The nuclear design of a spallation neutron system uses a calculation code system coupling the high-energy nucleon-meson transport code with a neutron-photon transport code. NMTC/JAERI models the intranuclear cascade followed by the particle evaporation process, taking into account competition with fission. Particle transport calculations are carried out for protons, neutrons, pi-mesons and mu-mesons. To verify and improve the accuracy of the high-energy nucleon-meson transport code, data on spallation and spallation neutron fragments from integral experiments were collected. (S.Y.)

  7. Internal mechanisms underlying anticipatory language processing: Evidence from event-related-potentials and neural oscillations.

    Science.gov (United States)

    Li, Xiaoqing; Zhang, Yuping; Xia, Jinyan; Swaab, Tamara Y

    2017-07-28

    Although numerous studies have demonstrated that the language processing system can predict upcoming content during comprehension, there is still no clear picture of the anticipatory stage of predictive processing. This electroencephalography (EEG) study examined the cognitive and neural oscillatory mechanisms underlying anticipatory processing during language comprehension, and the consequences of this prediction for bottom-up processing of predicted/unpredicted content. Participants read Mandarin Chinese sentences that were either strongly or weakly constraining and that contained critical nouns that were congruent or incongruent with the sentence contexts. We examined the effects of semantic predictability on anticipatory processing prior to the onset of the critical nouns and on integration of the critical nouns. The results revealed that, at the integration stage, the strong-constraint condition (compared to the weak-constraint condition) elicited a reduced N400 and reduced theta activity (4-7 Hz) for the congruent nouns, but induced beta (13-18 Hz) and theta (4-7 Hz) power decreases for the incongruent nouns, indicating benefits of confirmed predictions and potential costs of disconfirmed predictions. More importantly, at the anticipatory stage, the strongly constraining context elicited an enhanced sustained anterior negativity and a beta power decrease (19-25 Hz), which indicates that strong prediction places a higher processing load on the anticipatory stage of processing. The differences (in the ease of processing and in the underlying neural oscillatory activities) between the anticipatory and integration stages of lexical processing are discussed with regard to predictive processing models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions

    Directory of Open Access Journals (Sweden)

    Bakanauskienė Irena

    2017-12-01

    This article is intended to theoretically justify the decision-making process model for cases in which active participation of investing entities in controlling the activities of an organisation and their results is noticeable. Based on an analysis of the scientific literature, a concept of controlled conditions is formulated and, using a rational approach to the decision-making process, a model of an 11-step decision-making process under controlled intervention is presented. Conditions describing the case of controlled interventions have also been unified, providing preconditions to ensure the adequacy of the proposed decision-making process model.

  9. Quality assessment of baby food made of different pre-processed organic raw materials under industrial processing conditions.

    Science.gov (United States)

    Seidel, Kathrin; Kahl, Johannes; Paoletti, Flavio; Birlouez, Ines; Busscher, Nicolaas; Kretzschmar, Ursula; Särkkä-Tirkkonen, Marjo; Seljåsen, Randi; Sinesio, Fiorella; Torp, Torfinn; Baiamonte, Irene

    2015-02-01

    The market for processed food is rapidly growing. The industry needs methods for "processing with care" that lead to high-quality products in order to meet consumers' expectations. Processing influences the quality of the finished product through various factors. For carrot baby food, these are the raw material, the pre-processing and storage treatments, and the processing conditions. In this study, a quality assessment was performed on baby food made from differently pre-processed raw materials. The experiments were carried out under industrial conditions using fresh, frozen and stored organic carrots as raw material. Statistically significant differences were found for sensory attributes among the three autoclaved puree samples (e.g. overall odour, F = 90.72). Purees processed from frozen carrots show increased moisture content and a decrease in several chemical constituents. Biocrystallization identified changes between replications of the cooking. Pre-treatment of the raw material has a significant influence on the final quality of the baby food.

  10. Tardos fingerprinting codes in the combined digit model

    NARCIS (Netherlands)

    Skoric, B.; Katzenbeisser, S.; Schaathun, H.G.; Celik, M.U.

    2009-01-01

    We introduce a new attack model for collusion-secure codes, called the combined digit model, which represents signal processing attacks against the underlying watermarking level better than existing models. In this paper, we analyze the performance of two variants of the Tardos code and show that
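
    For readers unfamiliar with the underlying construction, the probabilistic Tardos fingerprinting scheme that the paper analyzes can be sketched in a few lines: per-position bias values are drawn from an arcsine-like density, user codewords are Bernoulli samples of those biases, and suspects are ranked by a correlation score. The snippet below is a generic textbook version with hypothetical parameters; it is not the paper's variants or the combined digit model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, m, c = 100, 4096, 2      # users, code length, colluders (hypothetical)
t = 1.0 / (300 * c)               # cutoff parameter; biases lie in [t, 1-t]

# Per-position bias p_i from the arcsine density, truncated to [t, 1-t],
# sampled by inverse transform: p = sin(u)^2 with u uniform.
u = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), size=m)
p = np.sin(u) ** 2
X = (rng.uniform(size=(n_users, m)) < p).astype(int)   # user codewords

# Two colluders forge a copy by mixing their codewords position-wise.
colluders = [3, 42]
y = np.where(rng.uniform(size=m) < 0.5, X[colluders[0]], X[colluders[1]])

# Tardos accusation score, accumulated over positions where the copy has a 1.
g1 = np.sqrt((1 - p) / p)         # reward for matching a 1
g0 = -np.sqrt(p / (1 - p))        # penalty for not matching
scores = np.where(y == 1, np.where(X == 1, g1, g0), 0.0).sum(axis=1)

print("top suspects:", np.argsort(scores)[-3:][::-1])  # colluders score highest
```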

  11. Melting and evaporation analysis of the first wall in a water-cooled breeding blanket module under vertical displacement event by using the MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Geon-Woo [Department of Nuclear Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Cho, Hyoung-Kyu, E-mail: chohk@snu.ac.kr [Department of Nuclear Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Park, Goon-Cherl [Department of Nuclear Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Im, Kihak [National Fusion Research Institute, 169-148 Gwahak-ro, Yuseong-gu, Daejeon 34133 (Korea, Republic of)

    2017-05-15

    Highlights: • Material phase change of the first wall was simulated for a vertical displacement event. • An in-house first wall module was developed to simulate melting and evaporation. • An effective heat capacity method and an evaporation model were proposed. • The MARS code was used to predict two-phase phenomena in the coolant channel. • The phase change simulation was performed by coupling MARS and the in-house module. - Abstract: Plasma facing components of tokamak reactors such as ITER or the Korean fusion demonstration reactor (K-DEMO) can be subjected to damage by plasma instabilities. Plasma disruptions such as a vertical displacement event (VDE), with their high heat fluxes, can cause melting and vaporization of plasma-facing materials and burnout of coolant channels. In this study, to simulate melting and vaporization of the first wall in a water-cooled breeding blanket under a VDE, one-dimensional heat equations were solved numerically using an in-house first wall module that includes phase change models, an effective heat capacity method, and an evaporation model. For the thermal-hydraulics, the in-house first wall analysis module was coupled with the nuclear reactor safety analysis code MARS to take advantage of its prediction capability for two-phase flow and critical heat flux (CHF) occurrence. The first wall was modelled according to the conceptual design of the K-DEMO, and a plasma disruption heat flux of 600 MW/m² for 0.1 s was applied. The phase change simulation results were analyzed in terms of the melting and evaporation thicknesses and the occurrence of CHF. The thermal integrity of the blanket first wall is discussed to confirm whether the structural material melts under the given conditions.
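
    The effective heat capacity method mentioned above replaces the latent heat of fusion by an artificially increased specific heat over a narrow melting range, so that a single-phase conduction solver handles melting implicitly. A minimal 1D explicit finite-difference sketch follows; the tungsten-like property values, boundary conditions and melting window are rough placeholders, not the K-DEMO first-wall data, and the evaporation model is omitted.

```python
import numpy as np

# 1D slab heated on one face; effective heat capacity method for melting.
nx = 50
L_slab = 0.005                            # 5 mm wall [m] (placeholder)
dx = L_slab / nx
rho, k_cond, cp = 19.3e3, 110.0, 140.0    # density, conductivity, specific heat
T_m, dT_m, L_fus = 3695.0, 50.0, 1.93e5   # melt point [K], mushy range, latent heat
q_in = 600e6                              # disruption heat flux [W/m^2]
h_cool, T_cool = 5.0e4, 600.0             # coolant-side convection (placeholder)
T = np.full(nx, 600.0)                    # initial temperature [K]

def cp_eff(T):
    # Latent heat smeared over [T_m, T_m + dT_m] as extra heat capacity.
    boost = np.where((T >= T_m) & (T <= T_m + dT_m), L_fus / dT_m, 0.0)
    return cp + boost

dt = 0.4 * rho * cp * dx**2 / k_cond      # conservative explicit stability limit
for _ in range(int(0.1 / dt)):            # 0.1 s vertical displacement event
    q = -k_cond * np.diff(T) / dx         # conductive flux from cell i to i+1
    net = np.empty(nx)
    net[0] = q_in - q[0]                          # plasma-heated face
    net[1:-1] = q[:-1] - q[1:]
    net[-1] = q[-1] - h_cool * (T[-1] - T_cool)   # cooled face
    T += dt * net / (rho * cp_eff(T) * dx)

molten = T >= T_m + dT_m
print(f"peak T = {T.max():.0f} K, molten depth ~ {molten.sum() * dx * 1e3:.2f} mm")
```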

  12. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper discusses five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code are presented, a sample is given, and the present status and future development plans are discussed.

  13. Methodology for optimization of process integration schemes in a biorefinery under uncertainty

    International Nuclear Information System (INIS)

    González-Cortés, Meilyn; Martínez-Martínez, Yenisleidys; Albernas-Carvajal, Yailet; Pedraza-Garciga, Julio; Morales-Zamora, Marlen (Departamento de Ingeniería Química, Facultad de Química y Farmacia, Universidad Central Marta Abreu de las Villas, Cuba)

    2017-01-01

    Uncertainty has a great impact on investment decisions, on the operability of plants, and on the feasibility of integration opportunities in chemical processes. This paper presents the steps for optimizing process investment in process integration under conditions of uncertainty. The potential of sugarcane biomass for integration with several plants in a biorefinery scheme for obtaining chemical products and thermal and electric energy is shown. Among the factories with potential for this integration are pulp and paper plants, sugar factories and other derivative processes. These factories share common resources and also produce a variety of products that can be exchanged between them, so that certain products generated in one of them can be raw material in another plant. The methodology developed guides the search for feasible investment projects under uncertainty. The objective function considered was the maximization of the net present value over the different scenarios generated from the integration scheme. (author)
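
    The scenario-based formulation sketched in the abstract can be illustrated with a toy selection problem: evaluate each candidate integration scheme's net present value (NPV) across uncertainty scenarios and rank by expected value. All cash-flow numbers and scenario probabilities below are invented placeholders, not the paper's data.

```python
import numpy as np

# Toy scenario-based screening of biorefinery integration schemes:
# maximize expected NPV over demand/price scenarios (all numbers invented).
r = 0.10                                  # discount rate
years = np.arange(1, 11)                  # 10-year horizon
disc = 1.0 / (1.0 + r) ** years           # discount factors

schemes = {                               # capex, annual cash flow per scenario
    "sugar+cogeneration": (-50e6, [9e6, 12e6, 15e6]),
    "sugar+pulp&paper":   (-80e6, [10e6, 16e6, 22e6]),
    "sugar+chemicals":    (-65e6, [6e6, 14e6, 24e6]),
}
p_scen = np.array([0.3, 0.5, 0.2])        # scenario probabilities

for name, (capex, flows) in schemes.items():
    npvs = np.array([capex + cf * disc.sum() for cf in flows])
    print(f"{name:20s} E[NPV] = {p_scen @ npvs / 1e6:7.2f} M$ "
          f"(worst case {npvs.min() / 1e6:6.1f} M$)")
```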

  14. Method for improvement of gamma-transition cascade spectra amplitude resolution by computer processing of coincidence codes

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.

    1982-01-01

    A method for improving the amplitude resolution in the case of coinciding codes recorded on magnetic tape is suggested. Using records, taken with Ge(Li) detectors, of cascades of gamma-transitions from the 35Cl(n,γ) reaction, it is shown that the full width at half maximum of the peak may decrease by a factor of 2.6 for quanta with energy close to the neutron binding energy. There is no loss of efficiency.

  15. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language at no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
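
    The byte-level n-gram approach lends itself to a compact sketch: build each author's profile as the set of their L most frequent byte n-grams, then score an unknown program by the size of the intersection with each profile (a simplified-profile similarity in the spirit of the paper, though not its exact measure; the parameter values and toy samples here are arbitrary).

```python
from collections import Counter

def profile(source: bytes, n: int = 6, L: int = 1500) -> set:
    """Set of the L most frequent byte n-grams of a source sample."""
    grams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def similarity(unknown: set, author: set) -> float:
    """Simplified profile intersection, normalized by the unknown's size."""
    return len(unknown & author) / max(len(unknown), 1)

# Usage sketch: tiny training samples per candidate author (hypothetical;
# real use would concatenate several undisputed programs per author).
authors = {
    "alice": [b"int main(){return 0;}\n"],
    "bob":   [b"public static void main(String[] a){}\n"],
}
unknown_code = b"int main(){printf(\"hi\");return 0;}\n"
u = profile(unknown_code)
best = max(authors, key=lambda a: similarity(u, profile(b"".join(authors[a]))))
print("attributed to:", best)
```

    Because the profiles are plain byte n-grams, the same pipeline works unchanged for C++, Java or any other language, which is the language-independence property the paper emphasizes.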

  16. Tritium test of the tritium processing components under the Annex III US-Japan Collaboration

    International Nuclear Information System (INIS)

    Konishi, Satoshi; Yoshida, Hiroshi; Naruse, Yuji; Binning, K.E.; Carlson, R.V.; Bartlit, J.R.; Anderson, J.L.

    1993-03-01

    The process-ready components for the Fuel Cleanup System were tested at TSTA under the US-Japan Collaboration program. A palladium diffuser for tritium purification and a ceramic electrolysis cell for the decomposition of tritiated water were each tested with pure tritium over several years. The characteristics of the components with hydrogen isotopes, the effects of impurities, and the long-term reliability of the components were studied. It was concluded that these components are suitable and attractive for fusion fuel processing systems. (author)

  17. Training processes in under 6s football competition: The transition from ingenuity to institutionalization

    OpenAIRE

    Abel Merino Orozco; Ana Arraiz Pérez; Fernando Sabirón Sierra

    2016-01-01

    Under 6s football competition is a school sport with inherent educational implications. Moreover, it is a booming non-formal socio-educational setting onto which families and children project training expectations and dreams. The aim is to understand the emerging learning processes promoted in this environment for 6-year-old children, as the child starts the institutionalization process in organized sport. The research uses a case study design in the ethnographic mode, through participant observation...

  18. THE APPEAL CALLED “EMBARGOS DE DECLARAÇÃO” IN ELECTORAL PROCESS: A BRIEF VIEW AFTER THE BRAZILIAN NEW CIVIL PROCEDURE CODE

    Directory of Open Access Journals (Sweden)

    Rodrigo Mazzei

    2016-12-01

    This paper analyzes the regulation of embargos de declaração within the electoral process, as well as the interpretation that the courts have given to it. It addresses essential issues of embargos de declaração, such as the deadline, legal nature, hypotheses of admissibility and suspensive effect, many of which are the subject of discussion in doctrine and jurisprudence, mainly due to the diversity and variety of rules dealing with the subject (the Electoral Code, the internal regulations of the courts, and the Civil Procedure and Criminal Procedure Codes, applied in the alternative), besides the need for a constitutional interpretation focused on the embargos de declaração. It also observes the proposals of the project of the new CPC, pending in the legislature, for the regulation of embargos de declaração, and the impacts that this new text will bring to the electoral process, pointing out possible ways of reconciling the "new" civil procedure with electoral law.

  19. Genetic Code Analysis Toolkit: A novel tool to explore the coding properties of the genetic code and DNA sequences

    Science.gov (United States)

    Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.

    2018-01-01

    The genetic code is degenerate and it is assumed that this redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under investigation. This paper presents the Genetic Code Analysis Toolkit (GCAT), which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and other properties. GCAT comes with an editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source. Homepage: http://www.gcat.bio/
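
    One of the tests GCAT provides, comma-freeness of a codon set, is simple to state: a trinucleotide code X is comma-free if no element of X appears straddling the junction of any two concatenated codewords (i.e., at frame offsets 1 or 2). A minimal stand-alone check, written from the standard definition rather than taken from GCAT's source, might look like this:

```python
def is_comma_free(code: set[str]) -> bool:
    """A trinucleotide code is comma-free if, for all u, v in the code,
    the concatenation uv contains no codeword at offsets 1 or 2."""
    assert all(len(c) == 3 for c in code)
    for u in code:
        for v in code:
            w = u + v
            if w[1:4] in code or w[2:5] in code:
                return False
    return True

# Example: a comma-free set and one that is not.
print(is_comma_free({"ACG", "TCG"}))   # True: no off-frame codeword appears
print(is_comma_free({"AAA"}))          # False: AAA|AAA contains AAA off-frame
```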

  20. Identification of important phenomena under sodium fire accidents based on PIRT process with factor analysis in sodium-cooled fast reactor

    International Nuclear Information System (INIS)

    Aoyagi, Mitsuhiro; Uchibori, Akihiro; Kikuchi, Shin; Takata, Takashi; Ohno, Shuji; Ohshima, Hiroyuki

    2016-01-01

    The PIRT (Phenomena Identification and Ranking Table) process is an effective method for identifying the key phenomena involved in safety issues at nuclear power plants. The present PIRT process aims at the validation of sodium fire analysis codes. Because a sodium fire accident in a sodium-cooled fast reactor (SFR) involves complex phenomena, various figures of merit (FOMs) can exist in this PIRT process. In addition, the importance evaluation of the phenomena for each FOM should be implemented in an objective manner within the PIRT process. This paper describes the methodology for the specification of FOMs, the identification of associated phenomena, and the importance evaluation of each associated phenomenon, in order to complete a ranking table of the important phenomena involved in a sodium fire accident in an SFR. The FOMs were specified through factor analysis in this PIRT process. Physical parameters to be quantified by a sodium fire analysis code were identified by considering, in the factor analysis, the concerns resulting from a sodium fire. Associated phenomena were identified through element- and sequence-based phenomena analyses, as is often done in PIRT processes. The importance of each associated phenomenon was evaluated by considering the sequence-based analysis of the associated phenomena correlated with the FOMs. The ranking table is then completed through the factor and phenomenon analyses. (author)

  1. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys list the complete set of currently defined codes for the NAGRADATA system, with appropriate definitions, arranged: 1. according to subject matter (thematically); 2. with the codes listed alphabetically; and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and for the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage memory requirements; and standardisation of terminology. The nature of this thesaurus-like "key to codes" makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system; future editions will be issued in a loose-leaf ring-binder system which can be updated by an organised updating service. (author)

  2. Stochastic stability of mechanical systems under renewal jump process parametric excitation

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R.K.; Larsen, Jesper Winther

    2005-01-01

    A dynamic system under parametric excitation in the form of a non-Erlang renewal jump process is considered. The excitation is a random train of nonoverlapping rectangular pulses with equal, deterministic heights. The time intervals between two consecutive jumps up (or down), are the sum of two...

  3. The Orexin Component of Fasting Triggers Memory Processes Underlying Conditioned Food Selection in the Rat

    Science.gov (United States)

    Ferry, Barbara; Duchamp-Viret, Patricia

    2014-01-01

    To test the selectivity of the orexin A (OXA) system in olfactory sensitivity, the present study compared the effects of fasting and of central infusion of OXA on the memory processes underlying odor-malaise association during the conditioned odor aversion (COA) paradigm. Animals implanted with a cannula in the left ventricle received ICV infusion…

  4. A cyano-terminated dithienyldiketopyrrolopyrrole dimer as a solution processable ambipolar semiconductor under ambient conditions.

    Science.gov (United States)

    Wang, Li; Zhang, Xiaojie; Tian, Hongkun; Lu, Yunfeng; Geng, Yanhou; Wang, Fosong

    2013-12-14

    A cyano-terminated dimer of dithienyldiketopyrrolopyrrole (TDPP), DPP2-CN, is a solution-processable ambipolar semiconductor with field-effect hole and electron mobilities of 0.066 and 0.033 cm² V⁻¹ s⁻¹, respectively, under ambient conditions.

  5. High paraffin Kumkol petroleum processing under fuel and lubricant petroleum scheme

    International Nuclear Information System (INIS)

    Nadirov, N.K.; Konaev, Eh.N.

    1997-01-01

    The technological feasibility of processing high-paraffin Kumkol petroleum under a fuel-and-lubricant scheme, with production of lubricant materials in short supply, fuels and technical paraffin, is shown. Putting a mini petroleum-refining block into operation at the Kumkol deposit is economically reasonable and raises the profitability of hydrocarbon raw material production. (author)

  6. 77 FR 43492 - Expedited Vocational Assessment Under the Sequential Evaluation Process

    Science.gov (United States)

    2012-07-25

    ..., or visit our Internet site, Social Security Online, at http://www.socialsecurity.gov . SUPPLEMENTARY... SOCIAL SECURITY ADMINISTRATION 20 CFR Parts 404 and 416 [Docket No. SSA-2010-0060] RIN 0960-AH26 Expedited Vocational Assessment Under the Sequential Evaluation Process AGENCY: Social Security...

  7. 78 FR 70088 - Agency Proposed Business Process Vision Under the Rehabilitation Act of 1973

    Science.gov (United States)

    2013-11-22

    ... site, Social Security Online, at http://www.socialsecurity.gov . SUPPLEMENTARY INFORMATION: Background... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA-2013-0042] Agency Proposed Business Process Vision Under the Rehabilitation Act of 1973 AGENCY: Social Security Administration (SSA). ACTION: Notice of...

  8. Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing

    Science.gov (United States)

    Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas

    2016-06-01

    While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization.

  9. AXAIR: A Computer Code for SAR Assessment of Plume-Exposure Doses from Potential Process-Accident Releases to Atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Pillinger, W.L.

    2001-05-17

    This report describes the AXAIR computer code which is available to terminal users for evaluating the doses to man from exposure to the atmospheric plume from postulated stack or building-vent releases at the Savannah River Plant. The emphasis herein is on documentation of the methodology only. The total-body doses evaluated are those that would be exceeded only 0.5 percent of the time based on worst-sector, worst-case meteorological probability analysis. The associated doses to other body organs are given in the dose breakdowns by radionuclide, body organ and pathway.

  10. Software supervisor: extension to the on-line codes utilization in order to help the process control

    International Nuclear Information System (INIS)

    Thomas, J.B.; Dumas, M.; Evrard, J.M.

    1988-01-01

    Calculation is a complex problem which is usually solved by human experts, and the complexity and potential of the software involved keep increasing. The introduction of calculations into real-time systems requires additional specifications. These aims can be achieved by means of the control of knowledge-based systems, as well as by the introduction of software techniques into the existing computing environment. The following examples are given: the automatic generation of calculation methods (in control language) in modular code systems; and the monitoring of calculations by expert systems, in order to assist on-line operations [fr

  11. Sipi soup inhibits cancer‑associated fibroblast activation and the inflammatory process by downregulating long non‑coding RNA HIPK1‑AS.

    Science.gov (United States)

    Zhou, Bingxiu; Yu, Yuanyuan; Yu, Lixia; Que, Binfu; Qiu, Rui

    2018-06-06

    Sipi soup (SPS), the aqueous extract derived from the root bark of Sophora japonica L., Salix babylonica L., Morus alba L. and Amygdalus davidiana (Carr.) C. de Vos, is a traditional Chinese medicine frequently used to prevent and treat infection and inflammation. However, the role of SPS in cancer-associated fibroblasts (CAFs) requires further investigation. In the present study, the effects of SPS on fibroblast inactivation and the underlying mechanism were investigated. Reverse transcription-quantitative polymerase chain reaction was used to analyze the mRNA expression levels of fibroblast activation protein (FAP), interleukin (IL)-6, α-smooth muscle actin (α-SMA) and programmed cell death 4 (PDCD4). Flow cytometry was used to evaluate cell apoptosis. Immunofluorescence was used to determine the number of activated fibroblasts. The present study reported that SPS treatment did not affect the proliferative or apoptotic potential of fibroblasts. Treatment with HeLa cell culture medium (CM) induced a significant increase in the expression levels of FAP, IL-6 and α-SMA, but reduced the expression of PDCD4. SPS reversed the effects of HeLa CM on the expression of these genes. Analysis with a long non-coding (lnc)RNA array of numerous differentially expressed lncRNAs revealed that the expression levels of the lncRNA homeodomain-interacting protein kinase 1 antisense RNA (HIPK1-AS) were increased in cervicitis tissues and cervical squamous cell carcinoma tissues compared with normal cervical tissues. HIPK1-AS expression levels were upregulated in response to HeLa CM, but were decreased under SPS treatment. The downregulation of HIPK1-AS expression via short hairpin RNA abolished the effects of HeLa CM on the expression of inflammation-associated genes. The findings of the present study suggest that SPS may prevent the progression of cervical cancer by inhibiting the activation of CAFs and the inflammatory process through reduced HIPK1-AS expression.

  12. Ultrasonic signal processing and B-SCAN imaging for nondestructive testing. Application to under - cladding - cracks

    International Nuclear Information System (INIS)

    Theron, G.

    1988-02-01

    Crack propagation under the stainless steel cladding of nuclear reactor vessels is monitored by ultrasonic testing. This work studies signal processing to improve the detection and sizing of defects. Two possibilities are examined: processing of each individual signal, and simultaneous processing of all the signals forming a B-SCAN image. A bibliographic study of time-frequency methods shows that they are not suitable for pulses, so decomposition into instantaneous frequency and envelope is used instead. The effect of the interference of two close echoes on the instantaneous frequency is studied. The deconvolution of B-SCAN images is performed using the transducer field. A point-by-point deconvolution method, less sensitive to noise, is developed. B-SCAN images are processed in two phases: interface signal processing and deconvolution. These calculations improve image accuracy and dynamics. The water-steel interface and the ferritic-austenitic interface are separated, echoes from the crack tip are visualized, and crack-hole differentiation is improved [fr
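
    The envelope/instantaneous-frequency decomposition referred to above is conventionally obtained from the analytic signal via the Hilbert transform. A minimal sketch on a synthetic ultrasonic pulse follows; the pulse parameters are arbitrary placeholders, not the report's measurement data.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100e6                                   # sampling rate [Hz]
t = np.arange(0, 4e-6, 1 / fs)
f0, t0, tau = 5e6, 1e-6, 0.3e-6              # synthetic 5 MHz echo parameters
pulse = np.exp(-((t - t0) / tau) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

analytic = hilbert(pulse)                    # analytic signal x + i*H[x]
envelope = np.abs(analytic)                  # amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs   # instantaneous frequency [Hz]

peak = envelope.argmax()
print(f"echo arrival ~ {t[peak] * 1e6:.2f} us, "
      f"inst. frequency at peak ~ {inst_freq[peak] / 1e6:.2f} MHz")
```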

  13. Working through the pain: working memory capacity and differences in processing and storage under pain.

    Science.gov (United States)

    Sanchez, Christopher A

    2011-02-01

    It has been suggested that pain perception and attention are closely linked at both a neural and a behavioural level. If pain and attention are so linked, it is reasonable to speculate that those who vary in working memory capacity (WMC) should be affected by pain differently. This study compares the performance of individuals who differ in WMC as they perform processing and memory span tasks while under mild pain and not. While processing performance under mild pain does not interact with WMC, the ability to store information for later recall does. This suggests that pain operates much like an additional processing burden, and that the ability to overcome this physical sensation is related to differences in WMC. © 2011 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business

  14. Mechanical and tribological behaviour of molten salt processed self-lubricated aluminium composite under different treatments

    Science.gov (United States)

    Kannan, C.; Ramanujam, R.

    2018-05-01

    The aim of this research work is to evaluate the mechanical and tribological behaviour of Al 7075 based self-lubricated hybrid nanocomposite under different treated conditions viz. as-cast, T6 and deep cryo treated. In order to overcome the drawbacks associated with conventional stir casting, a combinational approach that consists of molten salt processing, ultrasonic assistance and optimized mechanical stirring is adopted in this study to fabricate the nanocomposite. The mechanical characterisation tests carried out on this nanocomposite reveals an improvement of about 39% in hardness and 22% in ultimate tensile strength possible under T6 condition. Under specific conditions, the wear rate can be reduced to the extent of about 63% through the usage of self-lubricated hybrid nanocomposite under T6 condition.

  15. Improvement of the MSG code for the MONJU evaporators. Additional function of reverse flow calculation on water/steam model and animation for post processing

    International Nuclear Information System (INIS)

    Toda, Shin-ichi; Yoshikawa, Shinji; Oketani, Kazuhiro

    2003-05-01

    An improved version of the MSG code (Multi-dimensional Thermal-hydraulic Analysis Code for Steam Generators) has been released. The original version was improved in order to calculate reverse flow on the water/steam side and to animate the post-processing data. To calculate local reverse flow, the code was modified to set the pressure at each divided node point of the water/steam region in the helical-coil heat transfer tubes. The matrix solver was also improved to handle the problem within a practical calculation time despite the increased number of pressure points. In this case, pressure and enthalpy have to be calculated simultaneously; however, it was found that the block-Jacobi method yields a diagonally dominant matrix, which can be solved efficiently with a relaxation method. Calculations of a steady-state condition and of a transient of SG blowdown with manual trip operation confirmed the improved calculation capability of the MSG code. An animation function showing temperature contours on the sodium shell side has been added for post-processing. Since the animation is very effective for understanding the thermal-hydraulic behavior on the sodium shell side of the SG, especially under transient conditions, the analysis and evaluation of the calculation results can be performed more quickly and effectively. (author)
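
    The solver remark above rests on a classical fact: Jacobi-type relaxation converges for strictly diagonally dominant systems. The sketch below shows the idea with a plain point-Jacobi iteration on a small synthetic system; it is not the MSG code's block variant, and the matrix is an invented placeholder for the pressure system.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Plain Jacobi relaxation; converges when A is strictly diagonally
    dominant (the property the block-Jacobi ordering establishes in MSG)."""
    D = np.diag(A)
    R = A - np.diag(D)                        # off-diagonal part
    x = np.zeros_like(b)
    for it in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, it
        x = x_new
    raise RuntimeError("Jacobi did not converge")

# Synthetic strictly diagonally dominant system (placeholder).
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, (50, 50))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)     # enforce dominance
b = rng.uniform(-1, 1, 50)
x, iters = jacobi(A, b)
print(f"converged in {iters} iterations, residual {np.linalg.norm(A @ x - b):.2e}")
```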

  16. The parallel processing of EGS4 code on distributed memory scalar parallel computer:Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the electromagnetic cascade Monte Carlo simulation code EGS4 on the distributed-memory scalar parallel computer Intel Paragon XP/S15-256 is described. A feature of EGS4 is that the calculation time per incident particle varies widely, because of the dynamic generation of secondary particles and the different behavior of each particle. The granularity of the parallel processing, the parallel programming model, and the algorithm for parallel random number generation are discussed, and two methods, allocating particles to processors either dynamically or statically, are used to realize high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, speedup factors of nearly 100 were attained for three problems with 128 processors. It was found that when both the calculation time per incident particle and its dispersion are large, the dynamic particle allocation method, which balances the load across processors, is preferable, and that when they are small, the static particle allocation method, which reduces the communication overhead, is preferable. Moreover, it is pointed out that double-precision variables must be used in the EGS4 code to obtain accurate results. Finally, the workflow of program parallelization is analyzed, and tools for program parallelization are discussed in the light of the experience gained from the EGS4 parallelization. (author).
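
    The dynamic-versus-static trade-off described above can be mimicked on any multicore machine. The sketch below contrasts static chunking with a dynamic work queue using Python's multiprocessing; the per-particle "history" is a dummy function with deliberately skewed run times, standing in for EGS4's variable-cost particle histories.

```python
import time
from multiprocessing import Pool

def history(seed):
    """Dummy particle history with highly variable cost (stands in for a
    Monte Carlo shower; EGS4 histories vary similarly in length)."""
    n = 1_000 * (1 + (seed % 17) ** 2)    # deliberately skewed workload
    s = 0.0
    for i in range(n):
        s += (seed * 1103515245 + i) % 7
    return s

if __name__ == "__main__":
    seeds = list(range(512))
    with Pool(8) as pool:
        t0 = time.perf_counter()
        pool.map(history, seeds, chunksize=len(seeds) // 8)   # static split
        t_static = time.perf_counter() - t0
        t0 = time.perf_counter()
        pool.map(history, seeds, chunksize=1)                 # dynamic queue
        t_dynamic = time.perf_counter() - t0
    print(f"static: {t_static:.2f} s   dynamic: {t_dynamic:.2f} s")
```

    With a skewed workload the dynamic queue typically wins, exactly as the paper reports for problems with large per-particle time dispersion; with uniform, cheap histories the static split avoids the queueing overhead.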

  17. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  18. Mapping Common Aphasia Assessments to Underlying Cognitive Processes and Their Neural Substrates.

    Science.gov (United States)

    Lacey, Elizabeth H; Skipper-Kallal, Laura M; Xing, Shihui; Fama, Mackenzie E; Turkeltaub, Peter E

    2017-05-01

    Understanding the relationships between clinical tests, the processes they measure, and the brain networks underlying them, is critical in order for clinicians to move beyond aphasia syndrome classification toward specification of individual language process impairments. To understand the cognitive, language, and neuroanatomical factors underlying scores of commonly used aphasia tests. Twenty-five behavioral tests were administered to a group of 38 chronic left hemisphere stroke survivors and a high-resolution magnetic resonance image was obtained. Test scores were entered into a principal components analysis to extract the latent variables (factors) measured by the tests. Multivariate lesion-symptom mapping was used to localize lesions associated with the factor scores. The principal components analysis yielded 4 dissociable factors, which we labeled Word Finding/Fluency, Comprehension, Phonology/Working Memory Capacity, and Executive Function. While many tests loaded onto the factors in predictable ways, some relied heavily on factors not commonly associated with the tests. Lesion symptom mapping demonstrated discrete brain structures associated with each factor, including frontal, temporal, and parietal areas extending beyond the classical language network. Specific functions mapped onto brain anatomy largely in correspondence with modern neural models of language processing. An extensive clinical aphasia assessment identifies 4 independent language functions, relying on discrete parts of the left middle cerebral artery territory. A better understanding of the processes underlying cognitive tests and the link between lesion and behavior may lead to improved aphasia diagnosis, and may yield treatments better targeted to an individual's specific pattern of deficits and preserved abilities.

  19. X-ray microtomography study of the compaction process of rods under tapping.

    Science.gov (United States)

    Fu, Yang; Xi, Yan; Cao, Yixin; Wang, Yujie

    2012-05-01

    We present an x-ray microtomography study of the compaction process of cylindrical rods under tapping. The process is monitored by measuring the evolution of the orientational order parameter, local, and overall packing densities as a function of the tapping number for different tapping intensities. The slow relaxation dynamics of the orientational order parameter can be well fitted with a stretched-exponential law with stretching exponents ranging from 0.9 to 1.6. The corresponding relaxation time versus tapping intensity follows an Arrhenius behavior which is reminiscent of the slow dynamics in thermal glassy systems. We also investigated the boundary effect on the ordering process and found that boundary rods order faster than interior ones. In searching for the underlying mechanism of the slow dynamics, we estimated the initial random velocities of the rods under tapping and found that the ordering process is compatible with a diffusion mechanism. The average coordination number as a function of the tapping number at different tapping intensities has also been measured, which spans a range from 6 to 8.
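
    Fitting a stretched-exponential relaxation of the kind reported above is straightforward with a nonlinear least-squares routine. The sketch below fits S(t) = S_inf - (S_inf - S_0) * exp(-(t/tau)^beta) to synthetic tapping data; all parameter values are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, s_inf, s0, tau, beta):
    """Stretched-exponential relaxation of an order parameter."""
    return s_inf - (s_inf - s0) * np.exp(-(t / tau) ** beta)

# Synthetic "order parameter vs tap number" data (invented placeholders).
taps = np.arange(1, 2001)
true = stretched_exp(taps, 0.85, 0.10, 300.0, 1.2)
rng = np.random.default_rng(0)
data = true + rng.normal(0, 0.01, taps.size)

popt, _ = curve_fit(stretched_exp, taps, data,
                    p0=[0.8, 0.1, 200.0, 1.0])   # rough initial guess
s_inf, s0, tau, beta = popt
print(f"tau = {tau:.0f} taps, beta = {beta:.2f}")
```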

  20. Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions

    Science.gov (United States)

    Nabi, Jameel-Un; Böyükata, Mahmut

    2017-01-01

    The structure and the weak-interaction-mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of the WP nuclei, energy rates of β-delayed protons and their emission probabilities were then calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with a previous calculation. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr; for the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the view that electron capture rates form an integral part of the weak rates under rp-process conditions and play an important role in nuclear model calculations.