WorldWideScience

Sample records for helio code prediction

  1. The HELIOS-2 lattice physics code

    International Nuclear Information System (INIS)

    Wemple, C.A.; Gheorghiu, H-N.M.; Stamm'ler, R.J.J.; Villarino, E.A.

    2008-01-01

    Major advances have been made in the HELIOS code, resulting in the impending release of a new version, HELIOS-2. The new code includes a method of characteristics (MOC) transport solver to supplement the existing collision probabilities (CP) solver. A 177-group, ENDF/B-VII nuclear data library has been developed for inclusion with the new code package. Computational tests have been performed to verify the performance of the MOC solver against the CP solver, and validation testing against computational and measured benchmarks is underway. Results to date of the verification and validation testing are presented, demonstrating the excellent performance of the new transport solver and nuclear data library. (Author)

  2. Benchmark calculation for GT-MHR using HELIOS/MASTER code package and MCNP

    International Nuclear Information System (INIS)

    Lee, Kyung Hoon; Kim, Kang Seog; Noh, Jae Man; Song, Jae Seung; Zee, Sung Quun

    2005-01-01

    Recent research on the very high temperature gas-cooled reactor (VHTR) focuses on verifying system performance and safety under operating conditions. As part of this effort, an international gas-cooled reactor program initiated by the IAEA is under way. The key objectives of this program are the validation of analytical computer codes and the evaluation of benchmark models for projected and actual VHTRs. A new reactor physics analysis procedure for the prismatic VHTR is under development, adopting the conventional two-step approach: few-group constants are generated through transport lattice calculations with the HELIOS code, and the core physics analysis is performed by the three-dimensional nodal diffusion code MASTER. We evaluated the performance of the HELIOS/MASTER code package through benchmark calculations for the GT-MHR (Gas Turbine-Modular Helium Reactor) designed to dispose of weapons plutonium. In parallel, MCNP was employed as a reference code to verify the results of the HELIOS/MASTER procedure.

  3. Evaluation of CASMO-3 and HELIOS for Fuel Assembly Analysis from Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hyung Jin; Song, Jae Seung; Lee, Chung Chan

    2007-05-15

    This report presents a study comparing deterministic lattice physics calculations with Monte Carlo calculations for LWR fuel pin and assembly problems. The study has focused on comparing results from the lattice physics codes CASMO-3 and HELIOS against those from the continuous-energy Monte Carlo code McCARD. The comparisons include k{sub inf}, isotopic number densities, and pin power distributions. The CASMO-3 and HELIOS calculations for the k{sub inf}'s of the LWR fuel pin problems show good agreement with McCARD, within 956 pcm and 658 pcm, respectively. For the assembly problems with gadolinia burnable poison rods, the largest difference between the k{sub inf}'s is 1463 pcm with CASMO-3 and 1141 pcm with HELIOS. RMS errors for the pin power distributions of CASMO-3 and HELIOS are within 1.3% and 1.5%, respectively.
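
    The two figures of merit quoted above are conventionally a reactivity difference in pcm and a root-mean-square relative error of the normalized pin power map. As a hedged illustration (not code from the report, and with hypothetical numbers), they can be computed as follows:

      import math

      def pcm_difference(k_ref: float, k_test: float) -> float:
          # Reactivity difference in pcm; some authors instead quote 1e5 * (k_test - k_ref).
          return 1.0e5 * (1.0 / k_ref - 1.0 / k_test)

      def rms_pin_power_error(p_ref, p_test) -> float:
          # Root-mean-square relative error (%) over a normalized pin power map.
          rel = [(t - r) / r for r, t in zip(p_ref, p_test)]
          return 100.0 * math.sqrt(sum(e * e for e in rel) / len(rel))

      # Hypothetical example values, for illustration only:
      print(pcm_difference(1.18230, 1.17100))                  # about -816 pcm
      print(rms_pin_power_error([1.02, 0.98], [1.03, 0.97]))   # RMS error in %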

  4. HELIOS/DRAGON/NESTLE codes' simulation of void reactivity in a CANDU core

    International Nuclear Information System (INIS)

    Sarsour, H.N.; Rahnema, F.; Mosher, S.; Turinsky, P.J.; Serghiuta, D.; Marleau, G.; Courau, T.

    2002-01-01

    This paper presents results of a simulation of void reactivity in a CANDU core using the NESTLE core simulator, with cross sections from the HELIOS lattice physics code in conjunction with incremental cross sections from the DRAGON lattice physics code. First, a sub-region of a CANDU6 core is modeled using the NESTLE core simulator and the predictions are contrasted with those of the MCNP Monte Carlo code utilizing a continuous-energy model. In addition, whole-core modeling results are presented using the NESTLE finite difference method (FDM), the NESTLE nodal method (NM) without assembly discontinuity factors (ADF), and the NESTLE NM with ADF. The work presented in this paper has been performed as part of a project sponsored by the Canadian Nuclear Safety Commission (CNSC). The purpose of the project was to gather information and assess the accuracy of best estimate methods using calculational methods and codes developed independently from the CANDU industry. (author)

  5. Analysis of a small PWR core with the PARCS/Helios and PARCS/Serpent code systems

    International Nuclear Information System (INIS)

    Baiocco, G.; Petruzzi, A.; Bznuni, S.; Kozlowski, T.

    2017-01-01

    Highlights: • The consistency between Helios and Serpent few-group cross sections is shown. • The PARCS model is validated against a Monte Carlo 3D model. • The fission and capture rates are compared. • The influence of the spacer grids on the axial power distribution is shown. - Abstract: Lattice physics codes are primarily used to generate cross-section data for nodal codes. In this work the methodology of homogenized constant generation was applied to a small Pressurized Water Reactor (PWR) core, using the deterministic code Helios and the Monte Carlo code Serpent. Subsequently, a 3D analysis of the PWR core was performed with the nodal diffusion code PARCS using the two-group cross section data sets generated by Helios and Serpent. Moreover, a full 3D model of the PWR core was developed using Serpent in order to obtain a reference solution. Several parameters, such as k-eff, axial and radial power, and fission and capture rates, were compared and found to be in good agreement.

  6. HELIOS-CR - A 1-D radiation-magnetohydrodynamics code with inline atomic kinetics modeling

    International Nuclear Information System (INIS)

    MacFarlane, J.J.; Golovkin, I.E.; Woodruff, P.R.

    2006-01-01

    HELIOS-CR is a user-oriented 1D radiation-magnetohydrodynamics code to simulate the dynamic evolution of laser-produced plasmas and z-pinch plasmas. It includes an in-line collisional-radiative (CR) model for computing non-LTE atomic level populations at each time step of the hydrodynamics simulation. HELIOS-CR has been designed for ease of use, and is well-suited for experimentalists, as well as graduate and undergraduate student researchers. The energy equations employed include models for laser energy deposition, radiation from external sources, and high-current discharges. Radiative transport can be calculated using either a multi-frequency flux-limited diffusion model, or a multi-frequency, multi-angle short characteristics model. HELIOS-CR supports the use of SESAME equation of state (EOS) tables, PROPACEOS EOS/multi-group opacity data tables, and non-LTE plasma properties computed using the inline CR modeling. Time-, space-, and frequency-dependent results from HELIOS-CR calculations are readily displayed with the HydroPLOT graphics tool. In addition, the results of HELIOS simulations can be post-processed using the SPECT3D Imaging and Spectral Analysis Suite to generate images and spectra that can be directly compared with experimental measurements. The HELIOS-CR package runs on Windows, Linux, and Mac OS X platforms, and includes online documentation. We discuss the major features of HELIOS-CR, and present example results from simulations.
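
    For orientation, flux-limited diffusion (one of the radiative transport options mentioned above) has the generic textbook form below; this is general background, not necessarily the exact closure implemented in HELIOS-CR:

      \[
      \mathbf{F}_\nu \;=\; -\,\frac{c\,\lambda(R_\nu)}{\kappa_\nu \rho}\,\nabla E_\nu,
      \qquad
      R_\nu \;=\; \frac{|\nabla E_\nu|}{\kappa_\nu \rho\, E_\nu},
      \]

      where the flux limiter \lambda tends to 1/3 in the optically thick (diffusion) limit and is constructed so that |\mathbf{F}_\nu| \le c\,E_\nu in the free-streaming limit.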

  7. An Adaptation of the HELIOS/MASTER Code System to the Analysis of VHTR Cores

    International Nuclear Information System (INIS)

    Noh, Jae Man; Lee, Hyun Chul; Kim, Kang Seog; Kim, Yong Hee

    2006-01-01

    KAERI is developing a new computer code system for the analysis of VHTR cores based on the existing HELIOS/MASTER code system, which was originally developed for LWR core analysis. In VHTR reactor physics, there are several unique neutronic characteristics that cannot be handled easily by the conventional computer code systems applied to LWR core analysis. Typical examples of such characteristics are the double heterogeneity problem due to the particulate fuels, the effects of a spectrum shift and thermal up-scattering due to the graphite moderator, and a strong fuel/reflector interaction. In order to facilitate an easy treatment of such characteristics, we developed some methodologies for the HELIOS/MASTER code system and tested their applicability to the VHTR core analysis.

  8. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
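
    The diffusivity-factor statement can be checked with a few lines of generic radiative-transfer arithmetic (a hedged illustration, not HELIOS source code): for a purely absorbing slab, the exact angle-integrated flux transmittance is 2*E3(tau), which the two-stream treatment approximates by exp(-D*tau) with D ≈ 2.

      import numpy as np
      from scipy.special import expn

      tau = np.array([0.01, 0.1, 0.5, 1.0, 2.0])    # grid of optical depths
      exact = 2.0 * expn(3, tau)                    # exact pure-absorption flux transmittance, 2*E3(tau)
      approx = np.exp(-2.0 * tau)                   # two-stream approximation with diffusivity factor D = 2

      for t, e, a in zip(tau, exact, approx):
          print(f"tau = {t:5.2f}   exact = {e:.4f}   D=2 approx = {a:.4f}")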

  9. Advances in HELIOS2 nuclear data library

    Energy Technology Data Exchange (ETDEWEB)

    Wemple, Charles [Studsvik Scandpower, Inc., Idaho Falls, ID (United States); Simeonov, Teodosi [Studsvik Scandpower, Inc., Waltham, MA (United States)

    2017-09-15

    The ongoing development of the HELIOS2 code system at Studsvik includes periodic updates of the nuclear data library. The library expansion includes an update of the cross section data source to ENDF/B-VII.1, a significant expansion of the burnup chains, the addition of a more complete set of gamma production data, and the development of new resonance treatment options. The goal is to provide the capability for HELIOS2 to more accurately model a wider array of reactor applications and enhance interoperability with SNF, the Studsvik spent fuel analysis code. This paper also provides a discussion of the nuclear data library benchmarking effort and an overview of other HELIOS2 development efforts.

  10. Comparative calculations and parametric studies using HELIOS and TRITON. Technical report; Vergleichsrechnungen und Parameterstudien mit HELIOS und TRITON. Technischer Bericht

    Energy Technology Data Exchange (ETDEWEB)

    Sommer, Fabian

    2017-05-15

    Within the framework of the project ''Evaluation and feasibility of a validation of computational codes for criticality and burnup calculations for use in systems with boiling water reactor fuel'', the burnup code HELIOS was used for the calculation of inventories; its fast routines allow Monte Carlo based sensitivity and uncertainty analyses. The neutron multiplication factors from the HELIOS-based calculations were compared with TRITON results.

  11. Estimation of reactor core calculation by HELIOS/MASTER at power generating condition through DeCART, whole-core transport code

    International Nuclear Information System (INIS)

    Kim, H. Y.; Joo, H. G.; Kim, K. S.; Kim, G. Y.; Jang, M. H.

    2003-01-01

    The reactivity and power distribution errors of the HELIOS/MASTER core calculation under power generating conditions are assessed using the whole-core transport code DeCART. For this work, the cross section tablesets were generated for a medium-sized PWR following the standard procedure, and two-group nodal core calculations were performed. The test cases include HELIOS calculations for 2-D assemblies at constant thermal conditions, MASTER 3D assembly calculations at power generating conditions, and core calculations at HZP, HFP, and an abnormal power condition. In all these cases, the results of the DeCART code, in which pinwise thermal feedback effects are incorporated, are used as the reference. The core reactivity, assemblywise power distribution, axial power distribution, peaking factor, and thermal feedback effects are then compared. The comparison shows that the errors of the HELIOS/MASTER system in the core reactivity, assemblywise power distribution, and pin peaking factor are only 100∼300 pcm, 3%, and 2%, respectively. As far as the detailed pinwise power distribution is concerned, however, errors greater than 15% are observed.

  12. QUADRIGA and KIRKE-front-end applications for HELIOS

    International Nuclear Information System (INIS)

    Havluj, F.; Vocka, R.

    2010-01-01

    QUADRIGA and KIRKE are two productivity tools for Studsvik's HELIOS lattice physics code. Their main purpose is to shift repetitive and non-creative labor from the user to a computer program. QUADRIGA is a comprehensive tool for input file preparation for HELIOS using smart and modular templates, physics models and a user-friendly interface, including a job launcher and load balancer. It allows non-HELIOS-savvy users to run lattice physics calculations on a computer cluster, offering a user-friendly and simple GUI based on dynamically generated forms. The users do not have to know HELIOS, and most of the job management is done automatically. KIRKE is a straightforward utility for converting CAD vector drawings into HELIOS geometry descriptions, including automated generation of the computational mesh. It is invaluable for complex, non-repetitive geometries, where the HELIOS geometry input is extremely complicated and unimaginably tedious to construct. (Authors)

  13. HELIOS/DRAGON/NESTLE codes' simulation of the Gentilly-2 loss of class 4 power event

    International Nuclear Information System (INIS)

    Sarsour, H.N.; Turinsky, P.J.; Rahnema, F.; Mosher, S.; Serghiuta, D.; Marleau, G.; Courau, T.

    2002-01-01

    A loss of electrical power occurred at Gentilly-2 in September of 1995 while the station was operating at full power. There was an unexpectedly rapid core power increase, initiated by the drainage of the zone controllers and accelerated by coolant boiling. The core transient was terminated by Shutdown System No. 1 (SDS1) tripping when the out-of-core ion chambers exceeded the 10%/sec high rate of power increase trip setpoint at 1.29 sec. This resulted in the station automatically shutting down within 2 sec of event initiation. In the first 2 sec, 26 of the 58 SDS1 and SDS2 in-core flux detectors reached their overpower trip (ROPT) setpoints. The peak reactor power reached approximately 110%FP. Reference 1 presented detailed results of the simulations performed with coupled thermal-hydraulics and 3D neutron kinetics codes, SOPHT-G2 and the CERBERUS module of RFSP, and the various adjustments of these codes and of the plant representation that were needed to obtain the neutronic response observed in 1995. The purposes of this paper are to contrast a simulation prediction of the peak prompt core thermal power transient with the experimental estimate, and to note the impact of the spatial discretization approach utilized on the prompt core thermal power transient and the channel power distribution as a function of time. In addition, the adequacy of the time-step sizes employed and the sensitivity to the core's transient thermal-hydraulic conditions are studied. The work presented in this paper has been performed as part of a project sponsored by the Canadian Nuclear Safety Commission (CNSC). The purpose of the project was to gather information and assess the accuracy of best estimate methods using calculation methods and codes developed independently from the CANDU industry. The simulation of the accident was completed using the NESTLE core simulator, employing cross sections generated by the HELIOS lattice physics code and incremental cross sections generated by the DRAGON lattice physics code.

  14. Verification of HELIOS-MASTER system through benchmark of critical experiments

    International Nuclear Information System (INIS)

    Kim, H. Y.; Kim, K. Y.; Cho, B. O.; Lee, C. C.; Zee, S. O.

    1999-01-01

    The HELIOS-MASTER code system is verified through benchmarks of the critical experiments that were performed by RRC 'Kurchatov Institute' with water-moderated, hexagonally pitched lattices of highly enriched uranium fuel rods (80 w/o). We also ran the MCNP code with the same input described in the evaluation report, and compared our results with those of the report. HELIOS, developed by Scandpower A/S, is a two-dimensional transport program for the generation of group cross-sections, and MASTER, developed by KAERI, is a three-dimensional nuclear design and analysis code based on two-group diffusion theory. It solves the neutronics model with the AFEN (Analytic Function Expansion Nodal) method for hexagonal geometry. The results show that the HELIOS-MASTER code system is fast and accurate enough to be used as a nuclear core analysis tool for hexagonal geometry.

  15. HELIOS: Application for criticality limits assessment

    International Nuclear Information System (INIS)

    Simeonov, T.

    2011-01-01

    In the early years after the discovery of fission, criticality safety assessments and the established safety limits were based mainly on direct experiments. Later, following advances in theory, computational methods and computer hardware, theoretical methods were developed to the point of becoming reliable assessment tools. The computer codes started replacing the experiments, while the experimental data became a valuable validation source for their models. An application of the two-dimensional transport theory code HELIOS for the assessment of criticality limits is presented in this paper. The effects of the enrichment, the system dimensions, the H/U5 ratio and different reflectors were studied in heterogeneous and homogenized systems. Comparisons with published experimental data and evaluated safety limits are made to demonstrate the range of HELIOS applicability and its limitations. (Author)

  16. Comparison of scale/triton and helios burnup calculations for high burnup LWR fuel

    Energy Technology Data Exchange (ETDEWEB)

    Tittelbach, S.; Mispagel, T.; Phlippen, P.W. [WTI Wissenschaftlich-Technische Ingenieurberatung GmbH, Juelich (Germany)

    2009-07-01

    The presented analyses provide information about the suitability of the lattice burnup code HELIOS and the recently developed code SCALE/TRITON for the prediction of isotopic compositions of high-burnup LWR fuel. The accurate prediction of the isotopic inventory of high-burnup spent fuel is a prerequisite for safety analyses in and outside of the reactor core, safe loading of spent fuel into storage casks, design of next-generation spent fuel casks, and any consideration of burnup credit. Depletion analyses are performed with both burnup codes for PWR and BWR fuel samples which were irradiated far beyond 50 GWd/t within the LWR-PROTEUS Phase II project. (orig.)

  17. Complex test of the C-PORCA 5.0 using HELIOS calculations

    International Nuclear Information System (INIS)

    Pos, I.; Nemes, I.; Patai-Szabo, S.

    2001-01-01

    Testing of the C-PORCA 5.0 model using HELIOS calculations was performed. The basis of the tests was a 30-degree core sector of a WWER-440 containing fuel assemblies with different burnups. With both the HELIOS and C-PORCA codes, single-assembly burnup was calculated under the infinite-lattice assumption. The differently burned fuel was then placed in a 30-degree core sector. Finally, the 30-degree core was calculated and the HELIOS and C-PORCA results were compared. The comparison was used in the validation procedure of the C-PORCA model during the introduction of higher-enriched fuel at the Paks units. (Authors)

  18. VENUS-2 Benchmark Problem Analysis with HELIOS-1.9

    International Nuclear Information System (INIS)

    Jeong, Hyeon-Jun; Choe, Jiwon; Lee, Deokjung

    2014-01-01

    Since reliable benchmark data are available in the OECD/NEA report on the VENUS-2 MOX benchmark problem, users can assess the credibility of a code by comparing against those benchmark results. In this paper, the solution of the VENUS-2 benchmark problem from HELIOS 1.9 using the ENDF/B-VI library (NJOY91.13) is compared with the result from HELIOS 1.7, with the MCNP-4B result taken as reference data. The comparison contains the results of the pin cell calculation, the assembly calculation, and the core calculation. The eigenvalues are assessed by comparing them with the results from other codes. In the case of the UOX and MOX assemblies, the differences from the MCNP-4B results are about 10 pcm. However, there is some inaccuracy in the baffle-reflector condition, and relatively large differences were found in the MOX-reflector assembly and core calculations. Although HELIOS 1.9 utilizes an inflow transport correction, it seems to have a limited effect on the error in the baffle-reflector condition.

  19. Verification of HELIOS-MASTER system through benchmark of Halden boiling water reactor (HBWR)

    International Nuclear Information System (INIS)

    Kim, Ha Yong; Song, Jae Seung; Cho, Jin Young; Kim, Kang Seok; Lee, Chung Chan; Zee, Sung Quun

    2004-01-01

    To verify the HELIOS-MASTER computer code system for nuclear design, we have performed benchmark calculations for various reactor cores. The Halden reactor is a boiling, heavy water moderated reactor. At a full power of 18-20 MWt, the moderator temperature is 240 °C and the pressure is 33 bar. This study describes the verification of the HELIOS-MASTER computer code system for nuclear design and the analysis of a hexagonal and D2O-moderated core through a benchmark of the Halden reactor core. HELIOS, developed by Scandpower A/S, is a two-dimensional transport program for the generation of group cross-sections, and MASTER, developed by KAERI, is a three-dimensional nuclear design and analysis code based on two-group diffusion theory. It solves the neutronics model with the TPEN (Triangle based Polynomial Expansion Nodal) method for hexagonal geometry.

  20. HELIOS calculations for UO2 lattice benchmarks

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1998-01-01

    Calculations for the ANS UO2 lattice benchmark have been performed with the HELIOS lattice-physics code and six of its cross-section libraries. The results obtained from the different libraries permit conclusions to be drawn regarding the adequacy of the energy group structures and of the ENDF/B-VI evaluation for 238U. Scandpower A/S, the developer of HELIOS, provided Los Alamos National Laboratory with six different cross-section libraries. Three of the libraries were derived directly from Release 3 of ENDF/B-VI (ENDF/B-VI.3) and differ only in the number of groups (34, 89 or 190). The other three libraries are identical to the first three except for a modification to the cross sections for 238U in the resonance range.

  1. Comparison of Serpent and HELIOS-2 as applied for the PWR few-group cross section generation

    International Nuclear Information System (INIS)

    Fridman, E.; Leppaenen, J.; Wemple, C.

    2013-01-01

    This paper discusses recent modifications to the Serpent Monte Carlo code methodology related to the calculation of few-group diffusion coefficients and reflector discontinuity factors. The new methods were assessed in the following manner. First, few-group homogenized cross sections calculated by Serpent for a reference PWR core were compared with those generated by the commercial deterministic lattice transport code HELIOS-2. Second, the Serpent and HELIOS-2 few-group cross section sets were employed by the nodal diffusion code DYN3D for the modeling of the reference PWR core. Finally, the nodal diffusion results obtained using both cross section sets were compared with the full-core Serpent Monte Carlo solution. The test calculations show that Serpent can calculate the parameters required for nodal analyses similarly to conventional deterministic lattice codes. (authors)
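
    As background on the second quantity mentioned above, a (reflector) discontinuity factor is conventionally defined, in generalized equivalence theory, as the ratio of the heterogeneous (transport) surface flux to the homogeneous (nodal diffusion) surface flux for group g on node surface s; this is the standard textbook form, not necessarily the exact convention used in either code:

      \[
      f_g^{s} \;=\; \frac{\phi_g^{\mathrm{het}}(s)}{\phi_g^{\mathrm{hom}}(s)},
      \qquad
      f_g^{s,+}\,\phi_g^{\mathrm{hom},+} \;=\; f_g^{s,-}\,\phi_g^{\mathrm{hom},-},
      \]

      where the second relation replaces flux continuity at the nodal interface, allowing the homogenized nodal solution to reproduce the reference interface fluxes.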

  2. 3D heterogeneous transport calculations of CANDU fuel with EVENT/HELIOS

    International Nuclear Information System (INIS)

    Rahnema, F.; Mosher, S.; Ilas, D.; De Oliveira, C.; Eaton, M.; Stamm'ler, R.

    2002-01-01

    The applicability of the EVENT/HELIOS package to CANDU lattice cell analysis is studied in this paper. A 45-group cross section library is generated using the lattice depletion transport code HELIOS. This library is then used with the 3-D transport code EVENT to compute the pin fission densities and the multiplication constants for six configurations typical of a CANDU cell. The results are compared to those from MCNP with the same multigroup library. Differences of 70-150 pcm in multiplication constant and 0.08-0.95% in pin fission density are found for these cases. It is expected that refining the EVENT calculations can reduce these differences. This gives confidence in applying EVENT to transient analyses at the fuel pin level in a selected part of a CANDU core such as the limiting bundle during a loss of coolant accident (LOCA). (author)

  3. Modularization and Validation of FUN3D as a CREATE-AV Helios Near-Body Solver

    Science.gov (United States)

    Jain, Rohit; Biedron, Robert T.; Jones, William T.; Lee-Rausch, Elizabeth M.

    2016-01-01

    Under a recent collaborative effort between the US Army Aeroflightdynamics Directorate (AFDD) and NASA Langley, NASA's general unstructured CFD solver, FUN3D, was modularized as a CREATE-AV Helios near-body unstructured grid solver. The strategies adopted in the Helios/FUN3D integration effort are described. A validation study of the new capability is performed for rotorcraft cases spanning hover prediction, airloads prediction, coupling with computational structural dynamics, counter-rotating dual-rotor configurations, and free-flight trim. The integration of FUN3D, along with the previously integrated NASA OVERFLOW solver, lays the groundwork for future interaction opportunities where capabilities of one component could be leveraged with those of others in a relatively seamless fashion within CREATE-AV Helios.

  4. Design of HELIOS beam diagnostics

    International Nuclear Information System (INIS)

    Seagrave, J.D.; Bigio, I.J.; Jackson, S.V.; Laird, A.M.

    1979-01-01

    Verification of satisfactory operation of the HELIOS eight-beam laser system requires measurement of many parameters of each beam on each shot. Fifty-joule samples of each of the eight 1250-J, subnanosecond 34-cm-diameter beams of the HELIOS system are diverted to a gallery of eight folded telescopes and beamsplit to provide diagnostic measurements. Total pulse energy, and prepulse and postlase energy of each beam are measured; pulse shape details and a wavelength spectrum of a selected beam from each shot are measured; and provision is made for retropulse measurement and optical quality monitoring. All data are recorded digitally in a local screen room, with control and communication through a fiberoptic link to the main HELIOS computer

  5. XS data recalculation with HELIOS-1.8 and statistical investigation of C-PORCA and GEPETTO codes results based on in-core measurements

    International Nuclear Information System (INIS)

    Szabo, Sandor Patai; Parko, Tamas; Pos, Istvan

    2005-01-01

    As part of the power uprate process at NPP Paks, some reactor physics model development and testing was carried out. The model development focused mainly on more flexible handling of assemblies with different initial material compositions in the axial direction and on renewal of the few-group XS data storage. In parallel with this modification, all of the few-group XS data were recalculated with the newest HELIOS version. To ensure correct and accurate off-line and on-line reactor physics analysis of the reactor cores, a comprehensive investigation of the relevant codes has been carried out. During this process the accuracy of the applied models was determined and their appropriateness was also demonstrated. The paper presents the main features of the modifications and code developments and the basic results of the tests. (Authors)

  6. A Single and Comprehensive Helios Data Archive

    Science.gov (United States)

    Salem, C. S.

    2017-12-01

    Helios 1 & 2 rank among the most important missions in Heliophysics, and the more than 11 years of data returned by the two spacecraft remain of paramount interest to researchers. Their unique trajectories, which brought them closer to the Sun than any spacecraft before or since, enabled their diverse suite of in-situ instruments to return measurements of unprecedented scientific richness. There is, however, no comprehensive public repository of all Helios in-situ data. Currently, most of the highest resolution data can be obtained from a variety of places, although highly processed and with very little documentation, especially on calibration. Analysis of this data set requires overcoming a number of technical and instrumental issues, knowledge and expertise of which is only possessed by the original PIs of the Helios experiments. We present here NASA-funded work to aggregate, analyze, evaluate, document and archive the available Helios 1 and 2 in-situ data. This work at the UC Berkeley Space Sciences Laboratory is being undertaken in close collaboration with colleagues at the University of Koln, the University of Kiel, Imperial College London and the Paris Observatory. A careful, detailed analysis of the Helios fluxgate and search coil magnetic field data as well as the plasma data has revealed numerous issues and problems with the available, processed datasets, which we are still working to solve. We anticipate this comprehensive single archive of all Helios in-situ data, beyond its inherent scientific value, will also be an invaluable asset to both the Solar Probe Plus and Solar Orbiter missions.

  7. Helios: Understanding Solar Evolution Through Text Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Randazzese, Lucien [SRI International, Menlo Park, CA (United States)

    2016-12-02

    This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or “breakthroughs,” in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand curated set of full-text documents from Thomson Scientific and other sources.

  8. Solar wind plasma structure near a 'HELIOS-Perihelion'

    International Nuclear Information System (INIS)

    Kikuchi, H.

    1979-01-01

    The purpose of this paper is to introduce a couple of preliminary but important results obtained from HELIOS observations concerning solar wind plasma structure near a ''HELIOS-Perihelion'', among the data analyses in progress, partly in relation to laboratory plasma. Idealized profiles of the bulk velocity, density and temperature of the solar wind near 0.3 AU, as deduced from HELIOS A data, and correlated K-coronal contours were obtained. During 1974-1976 the Sun was in the declining phase of its cycle, and the coronal holes expanded to lower latitudes from the northern and southern polar holes. There is a general tendency for the northern coronal hole to be somewhat larger than the southern coronal hole. As regards solar wind velocity, there are two fast-stream regions with velocities as high as 800 km/sec. An electron spectrum measured near a HELIOS-Perihelion (0.3 AU), approximately in the solar direction, is shown. Three regions can be distinguished in the velocity distribution. The density contours of solar wind electrons in velocity space exhibit a narrow beam of electrons in the magnetic field direction close to the plane of observation. (Kato, T.)

  9. Helios movable Hartmann ball

    International Nuclear Information System (INIS)

    Tucker, H.E.; Day, R.D.; Hedges, R.O.; Hanlon, J.A.; Kortegaard, B.L.

    1981-01-01

    The MHB has been in operation for about nine months and has been performing quite well. It has provided the Helios laser fusion facility with additional target illumination flexibility so that many additional parameters can be investigated in the realm of target implosion physics

  10. Helios, and not FoxP3, is the marker of activated Tregs expressing GARP/LAP.

    Science.gov (United States)

    Elkord, Eyad; Abd Al Samid, May; Chaudhary, Belal

    2015-08-21

    Regulatory T cells (Tregs) are key players of immune regulation/dysregulation both in physiological and pathophysiological settings. Despite significant advances in understanding Treg function, there is still a pressing need to define reliable and specific markers that can distinguish different Treg subpopulations. Herein we show for the first time that markers of activated Tregs [latency associated peptide (LAP) and glycoprotein A repetitions predominant (GARP, or LRRC32)] are expressed on CD4+FoxP3- T cells expressing Helios (FoxP3-Helios+) in the steady state. Following TCR activation, GARP/LAP are up-regulated on CD4+Helios+ T cells regardless of FoxP3 expression (FoxP3+/-Helios+). We show that CD4+GARP+/-LAP+ Tregs make IL-10 immunosuppressive cytokine but not IFN-γ effector cytokine. Further characterization of FoxP3/Helios subpopulations showed that FoxP3+Helios+ Tregs proliferate in vitro significantly less than FoxP3+Helios- Tregs upon TCR stimulation. Unlike FoxP3+Helios- Tregs, FoxP3+Helios+ Tregs secrete IL-10 but not IFN-γ or IL-2, confirming they are bona fide Tregs with immunosuppressive characteristics. Taken together, Helios, and not FoxP3, is the marker of activated Tregs expressing GARP/LAP, and FoxP3+Helios+ Tregs have more suppressive characteristics, compared with FoxP3+Helios- Tregs. Our work implies that therapeutic modalities for treating autoimmune and inflammatory diseases, allergies and graft rejection should be designed to induce and/or expand FoxP3+Helios+ Tregs, while therapies against cancers or infectious diseases should avoid such expansion/induction.

  11. The application of the PARCS neutronics code to the Atucha-I and Atucha-II NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ward, Andrew; Collins, Ben; Xu, Yunlin; Downar, Thomas [Purdue University, West Lafayette, IN (United States); Madariaga, Marcelo [Autoridad Nuclear Regulatoria, Buenos Aires (Argentina)

    2008-07-01

    In order to analyze Central Nuclear Atucha II (CNA-II) with coupled RELAP5/PARCS, extensive benchmarking of the neutronics codes HELIOS and PARCS was completed. This benchmarking was performed using a range of test problems designed in collaboration with NA-SA. HELIOS has previously been used to model CANDU systems, but the results were validated for this case as well. The validation of both HELIOS and PARCS was performed primarily by comparisons to MCNP results for the same problems. Though originally designed to model light water systems, the capability of PARCS to predict the performance of a pressurized heavy water reactor was validated. Another noteworthy issue was the control rods: because the insertion of the rods is oblique, a special routine was added to PARCS to treat this effect. Lattice-level and core-level calculations were compared to the corresponding NA-SA codes WIMS and PUMA. In all cases there was good agreement in the results, which provided confidence that the neutronics methods and the core neutronics modelling would not be a significant source of error in coupled RELAP5/PARCS calculations. (authors)

  12. Generation of a library of two-group diffusion parameters for SPPS-1,6 by HELIOS

    International Nuclear Information System (INIS)

    Petkov, P.T.; Haralampieva, C.V.; Simeonov, T.; Stojanova, I.; Kamenov, K.

    2000-01-01

    The two-group three-dimensional nodal diffusion code SPPS-1.6 has been used for many years for steady-state neutronics calculations of the WWER-440 reactors at Kozloduy NPP. The old library of two-group diffusion parameters for SPPS-1.6 was generated by WIMSD4 with a nuclear data library compiled from three different libraries. The current paper presents our experience in generating a new library for SPPS-1.6 with the HELIOS lattice code. The accuracy of the current-coupling collision probability (CCCP) method in calculating a single WWER-440 assembly has been studied first. Among all possible angular discretizations of the interface partial currents, called coupling orders, only coupling order 3 is suitable for hexagonal cells. By dividing each cell side into 3 segments, an accuracy of 100 pcm has been achieved. The accuracy in calculating the absorber problem was estimated at 1%, which means about a 10% error in the control assemblies' efficiency. The accuracy for small core-reflector problems is 1% as well. The general conclusion is that HELIOS is accurate enough for assembly calculations, but inadequate for absorber and core-reflector problems. (Authors)

  13. Alternative Splice Variants Modulates Dominant-Negative Function of Helios in T-Cell Leukemia.

    Directory of Open Access Journals (Sweden)

    Shaorong Zhao

    The molecular defects which lead to the multistep development of human T-cell leukemia have yet to be identified. The DNA-binding protein Helios (also known as IKZF2), a member of the Ikaros family of Krüppel-like zinc-finger proteins, functions pivotally in T-cell differentiation and activation. In this study, we identify three novel short Helios splice variants which are specific to T-cell leukemia, and demonstrate their dominant-negative function. We then test the cellular localization of the distinct Helios isoforms, as well as their capability to form heterodimers with Ikaros and their association with complexes comprising histone deacetylase (HDAC). In addition, the ectopic expression of T-cell leukemic Helios isoforms interferes with T-cell proliferation and apoptosis. The gene expression profiling and pathway analysis indicated the enrichment of signaling pathways essential for gene expression, translation, cell cycle checkpoints, and the response to DNA damage stimulus. These data indicate that the molecular function of Helios is involved in the leukemogenesis and phenotype of T-cell leukemia, and also reveal Helios deregulation as a novel marker for T-cell leukemia.

  14. On the influence of spatial discretization on cross section preparation with HELIOS 1.9

    International Nuclear Information System (INIS)

    Merk, B.; Koch, R.

    2008-01-01

    The aim of many reactor calculations is the determination of the neutron flux and the nuclear power distribution. These distributions are in general calculated by solving the space- and energy-dependent static transport equation. A reliable computation of the neutron flux distribution within the reactor core would thus require the solution of the space-, energy- and angle-dependent neutron transport equation for the full nuclear reactor core. It is not yet feasible in practice to solve the neutron transport equation exactly for realistic reactor core geometries in full detail. Deterministic reactor calculations are therefore split into the cell and lattice calculation, based on static transport, and the core simulation, based on nodal codes. The cell and lattice calculations are mostly based on multi-group transport calculations in two dimensions on unstructured meshes. The resulting neutron flux is used for the preparation of few-group cross sections as used in the nodal 3D full-core simulation codes. One commercial standard product is the Studsvik Scandpower code system HELIOS 1.9. ''The transport method of HELIOS is called the CCCP method, because it is based on current coupling and collision probabilities (first-flight probabilities)... the system to be calculated consists of space elements that are coupled with each other and with the boundaries by interface currents, while the properties of each space element - its responses to sources and in-currents - are obtained from collision probabilities'' [1]. The applied collision probabilities method is based on the flat flux approximation. ''We assume that particles are emitted isotropically and uniformly within each discrete volume. This is known as the flat flux approximation. The flat flux approximation places a restriction on the mesh: if the flux varies rapidly within a region, the mesh should be refined sufficiently to ensure that the flux is well approximated by a piecewise-constant
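
    For reference, the first-flight collision probability that underlies the CCCP responses can be written in its standard textbook form under the flat-flux (flat-source) approximation; this is general background rather than the specific HELIOS formulation:

      \[
      P_{i\to j} \;=\; \frac{\Sigma_{t,j}}{V_i}\int_{V_i}\!\mathrm{d}^3r\int_{V_j}\!\mathrm{d}^3r'\;
      \frac{e^{-\tau(\mathbf{r},\mathbf{r}')}}{4\pi\,|\mathbf{r}-\mathbf{r}'|^{2}},
      \qquad
      \Sigma_{t,i}V_i\,P_{i\to j} \;=\; \Sigma_{t,j}V_j\,P_{j\to i},
      \]

      where \tau(\mathbf{r},\mathbf{r}') is the optical path between the two points and the flat-flux assumption enters through the uniform, isotropic emission density over each source region i.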

  15. Comparison of thermo-hydraulic analysis with measurements for HELIOS. The scaled integral test loop for PEACER

    International Nuclear Information System (INIS)

    Cho, Jae Hyun; Lim, Jun; Kim, Ji Hak; Hwang, Il Soon

    2009-01-01

    A scaled-down lead-bismuth eutectic circulating integral test loop named HELIOS (Heavy Eutectic liquid metal Loop for Integral test of Operability and Safety of PEACER) has been employed to characterize the steady-state isothermal forced circulation behavior and the non-isothermal natural circulation capability of lead and lead-alloy cooled advanced nuclear energy systems (LACANES). Thermal-hydraulic experiments have now been carried out with HELIOS, following rigorous calibration campaigns on the temperature and pressure sensors, in particular for isothermal steady-state forced convection driven by the pump. The isothermal steady-state forced convection test was performed to obtain the pressure loss information, including friction loss coefficients and form loss coefficients. The data were then compared with analyses from multiple approaches, including hand calculation results and computer simulation code results (MARS-LBE, CFX). We report the results of the comparisons between the analyses and the measurements. (author)
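
    The friction and form loss bookkeeping described above follows the usual one-dimensional loop hydraulics; the sketch below uses the generic Darcy-Weisbach form with placeholder numbers, not the actual HELIOS loop geometry or coefficients:

      def segment_pressure_loss(rho, v, L, D, f, K_sum):
          """Friction plus form losses over one loop segment, in Pa."""
          dynamic_head = 0.5 * rho * v * v        # rho * v^2 / 2
          return (f * L / D + K_sum) * dynamic_head

      # Placeholder, LBE-like values for illustration only
      rho = 10200.0    # kg/m^3, approximate lead-bismuth eutectic density
      v = 0.5          # m/s, assumed coolant velocity
      print(segment_pressure_loss(rho, v, L=2.0, D=0.05, f=0.02, K_sum=1.5))   # Pa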

  16. Cost-Effective Additive Manufacturing in Space: HELIOS Technology Challenge Guide

    Science.gov (United States)

    DeVieneni, Alayna; Velez, Carlos Andres; Benjamin, David; Hollenbeck, Jay

    2012-01-01

    Welcome to the HELIOS Technology Challenge Guide. This document is intended to serve as a general road map for participants of the HELIOS Technology Challenge [HTC] Program and the associated inaugural challenge: HTC-01: Cost-Effective Additive Manufacturing in Space. Please note that this guide is not a rule book and is not meant to hinder the development of innovative ideas. Its primary goal is to highlight the objectives of the HTC-01 Challenge and to describe possible solution routes and pitfalls that such technology may encounter in space. Please also note that participants wishing to demonstrate any hardware developed under this program during any future HELIOS Technology Challenge showcase event(s) may be subject to event regulations to be published separately at a later date.

  17. Adaption of the PARCS Code for Core Design Audit Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Uhm, Jae Beop; Kim, Hyunjik [Nuclear Safety Evaluation, Daejeon (Korea, Republic of); Jeong, Hun Young; Ahn, Seunghoon; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

    The eigenvalue calculation also includes quasi-static core depletion analyses. PARCS has implemented a variety of features and has been qualified as a regulatory audit code in conjunction with other NRC thermal-hydraulic codes such as TRACE or RELAP5. In this study, as an adaptation effort for audit applications, PARCS is applied to an audit analysis of a reload core design. The lattice physics code HELIOS is used for cross section generation. The PARCS-HELIOS code system has been established as a core analysis tool. Calculation results have been compared over a wide spectrum of parameters, such as the power distribution, the critical soluble boron concentration, and rod worths. A reasonable agreement between the audit calculation and the reference results has been found.

  18. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  19. Real depletion in nodal diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P.T.

    2002-01-01

    The fuel depletion is described by more than one hundred fuel isotopes in advanced lattice codes like HELIOS, but only a few fuel isotopes are accounted for even in the advanced steady-state diffusion codes. The general assumption that the number densities of the majority of the fuel isotopes depend only on the fuel burnup is seriously in error if high burnup is considered. The real depletion conditions in the reactor core differ from the asymptotic ones assumed at the stage of the lattice depletion calculations. This study reveals which fuel isotopes should be explicitly accounted for in the diffusion codes in order to predict adequately the real depletion effects in the core. A somewhat strange conclusion is that if the real number densities of the main fissionable isotopes are not explicitly accounted for in the diffusion code, then Sm-149 should not be accounted for either, because the net error in k-inf is smaller. (Authors)
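
    Sm-149 is the classic example of a nuclide whose density depends on the flux history rather than on burnup alone; the standard fission-product balance (general reactor physics, not taken from the paper) reads:

      \[
      \frac{\mathrm{d}N_{\mathrm{Pm}}}{\mathrm{d}t} \;=\; \gamma_{\mathrm{Pm}}\,\Sigma_f\,\phi \;-\; \lambda_{\mathrm{Pm}}\,N_{\mathrm{Pm}},
      \qquad
      \frac{\mathrm{d}N_{\mathrm{Sm}}}{\mathrm{d}t} \;=\; \lambda_{\mathrm{Pm}}\,N_{\mathrm{Pm}} \;-\; \sigma_a^{\mathrm{Sm}}\,N_{\mathrm{Sm}}\,\phi,
      \]

      whose equilibrium value N_Sm = \gamma_{\mathrm{Pm}}\Sigma_f/\sigma_a^{\mathrm{Sm}} is independent of the flux level, but the approach to it (and the buildup after shutdown) follows the actual in-core flux history rather than the asymptotic lattice-depletion conditions.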

  20. HELIOS: A high intensity chopper spectrometer at LANSCE

    International Nuclear Information System (INIS)

    Mason, T.E.; Broholm, C.; Fultz, B.

    1998-01-01

    A proposal to construct a high intensity chopper spectrometer at LANSCE as part of the SPSS upgrade project is discussed. HELIOS will be optimized for science requiring high sensitivity neutron spectroscopy. This includes studies of phonon density of states in small polycrystalline samples, magnetic excitations in quantum magnets and highly correlated electron systems, as well as parametric studies (as a function of pressure, temperature, or magnetic field) of S(Q,ω). By employing a compact design together with the use of supermirror guide in the incident flight path the neutron flux at HELIOS will be significantly higher than any other comparable instrument now operating

  1. HELIOS: A high intensity chopper spectrometer at LANSCE

    Energy Technology Data Exchange (ETDEWEB)

    Mason, T.E. [Oak Ridge National Lab., TN (United States); Broholm, C. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Physics and Astronomy; Fultz, B. [California Inst. of Tech., Pasadena, CA (United States). Dept. of Materials Science] [and others]

    1998-12-31

    A proposal to construct a high intensity chopper spectrometer at LANSCE as part of the SPSS upgrade project is discussed. HELIOS will be optimized for science requiring high sensitivity neutron spectroscopy. This includes studies of phonon density of states in small polycrystalline samples, magnetic excitations in quantum magnets and highly correlated electron systems, as well as parametric studies (as a function of pressure, temperature, or magnetic field) of S(Q,ω). By employing a compact design together with the use of supermirror guide in the incident flight path the neutron flux at HELIOS will be significantly higher than any other comparable instrument now operating.

  2. On the use of the Serpent Monte Carlo code for few-group cross section generation

    International Nuclear Information System (INIS)

    Fridman, E.; Leppaenen, J.

    2011-01-01

    Research highlights: → B1 methodology was used for generation of leakage-corrected few-group cross sections in the Serpent Monte Carlo code. → Few-group constants generated by Serpent were compared with those calculated by the Helios deterministic lattice transport code. → A 3D analysis of a PWR core was performed by the nodal diffusion code DYN3D employing two-group cross section sets generated by Serpent and Helios. → An excellent agreement in the results of the 3D core calculations obtained with the Helios- and Serpent-generated cross-section libraries was observed. - Abstract: Serpent is a recently developed 3D continuous-energy Monte Carlo (MC) reactor physics burnup calculation code. Serpent is specifically designed for lattice physics applications, including the generation of homogenized few-group constants for full-core simulators. Currently in Serpent, the few-group constants are obtained from infinite-lattice calculations with zero neutron current at the outer boundary. In this study, in order to account for the non-physical infinite-lattice approximation, the B1 methodology, routinely used by deterministic lattice transport codes, was considered for the generation of leakage-corrected few-group cross sections in the Serpent code. A preliminary assessment of the applicability of the B1 methodology for the generation of few-group constants in the Serpent code was carried out according to the following steps. Initially, the two-group constants generated by Serpent were compared with those calculated by the Helios deterministic lattice transport code. Then, a 3D analysis of a Pressurized Water Reactor (PWR) core was performed by the nodal diffusion code DYN3D employing two-group cross section sets generated by Serpent and Helios. At this stage thermal-hydraulic (T-H) feedback was neglected. The DYN3D results were compared with those obtained from the 3D full-core Serpent MC calculations. Finally, the full-core DYN3D calculations were repeated taking into account T-H feedback and
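
    The idea behind the leakage correction can be sketched with a much-simplified one-group analogue of the critical-spectrum search (the actual B1 treatment is multigroup and recomputes the spectrum; the numbers below are placeholders):

      def k_eff(k_inf, M2, B2):
          """One-group non-leakage relation: k_eff = k_inf / (1 + M^2 * B^2)."""
          return k_inf / (1.0 + M2 * B2)

      def critical_buckling(k_inf, M2):
          """Buckling B^2 that makes k_eff equal to 1 in the one-group relation."""
          return (k_inf - 1.0) / M2

      k_inf, M2 = 1.25, 60.0e-4          # placeholder k-infinity and migration area (m^2)
      B2 = critical_buckling(k_inf, M2)
      print(B2, k_eff(k_inf, M2, B2))    # k_eff comes out as 1.0 at the critical buckling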

  3. Evolution of helium stars; Evolución de estrellas de Helio

    Science.gov (United States)

    Panei, J. A.; Benvenuto, O. G.; Althaus, L. G.

    Helium stars can be identified with Wolf-Rayet (WR) stars that have lost their hydrogen-rich envelope, either because they belong to binary systems or through strong stellar winds. WR stars represent a normal evolutionary stage of massive stars, whose mass-loss rate is >= 3 × 10⁻⁵ M_solar/yr and is undergone by the star on a time scale much shorter than the He-burning time scale. This guarantees the ``homogeneity'' of the helium stars in our models. Stars of this type would be possible progenitors of type Ib and Ic SNe. Here we present a study of the evolution of helium stars from the helium main sequence, through the carbon flash, until it is exhausted in the central region, as well as the dependence on the mass and on mass loss for different masses. To this end we have used a complete stellar evolution code that performs all the Fowler reactions simultaneously. Convective mixing processes, the main neutrino emission mechanisms, and the effects of mass loss have also been taken into account. The opacities used were those of Rogers & Iglesias (1992). Owing to the mass loss in this type of star, we have found that the convective profiles, the chemical composition, the central temperature and pressure conditions, the luminosity, and the effective temperature depend essentially on the adopted mass-loss rate, which would have profound implications for the subsequent evolution of these objects.

  4. The Philosophy of User Interfaces in HELIO and the Importance of CASSIS

    Science.gov (United States)

    Bonnin, X.; Aboudarham, J.; Renié, C.; Csillaghy, A.; Messerotti, M.; Bentley, R. D.

    2012-09-01

    HELIO is a European project funded under FP7 (Project No. 238969). One of its goals as a Heliospheric Virtual Observatory is to provide easy access to many datasets scattered all over the world, in the fields of solar physics, heliophysics, and planetary magnetospheres. The efficiency of such a tool is very much related to the quality of the user interface. The HELIO infrastructure is based on a Service Oriented Architecture (SOA), grouping a network of standalone components, which allows four main types of interfaces: - The HELIO Front End (HFE) is a browser-based user interface, which offers centralized access to HELIO's main functionalities. In particular, it provides the possibility of reaching data directly, or of refining the selection by determining observing characteristics, such as which instrument was observing at a given time, which instrument was at a given location, etc. - Many services/components provide their own standalone graphical user interface. While each of these interfaces can be accessed individually, they can also be connected together. - Most services also provide direct access for any tool through a public interface. A small Java library, called the Java API, simplifies this access by providing client stubs for services and shields the user from security, discovery and failover issues. - Workflow capabilities are available in HELIO, allowing complex combinations of queries over several services. We want the user to be able to navigate easily, according to his or her needs, through the various interfaces, and possibly use a specific one in order to make more specialized queries. We will also emphasize the importance of the CASSIS project (Coordination Action for the integration of Solar System Infrastructure and Science) in encouraging the interoperability necessary to undertake scientific studies that span disciplinary boundaries. If related projects follow the guidelines being developed by CASSIS then using external resources with HELIO will be greatly simplified.

  5. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  6. Development of an automated core model for nuclear reactors

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1998-01-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop an automated package of computer codes that can model the steady-state behavior of nuclear-reactor cores of various designs. As an added benefit, data produced for steady-state analysis also can be used as input to the TRAC transient-analysis code for subsequent safety analysis of the reactor at any point in its operating lifetime. The basic capability to perform steady-state reactor-core analysis already existed in the combination of the HELIOS lattice-physics code and the NESTLE advanced nodal code. In this project, the automated package was completed by (1) obtaining cross-section libraries for HELIOS, (2) validating HELIOS by comparing its predictions to results from critical experiments and from the MCNP Monte Carlo code, (3) validating NESTLE by comparing its predictions to results from numerical benchmarks and to measured data from operating reactors, and (4) developing a linkage code to transform HELIOS output into NESTLE input

  7. Helios expression in regulatory T cells promotes immunosuppression, angiogenesis and the growth of leukemia cells in pediatric acute lymphoblastic leukemia.

    Science.gov (United States)

    Li, Xue; Li, Dong; Huang, Xiaoyang; Zhou, Panpan; Shi, Qing; Zhang, Bing; Ju, Xiuli

    2018-04-01

    Regulatory T cells (Tregs), characterized by the transcription factor forkhead box P3 (FoxP3), are crucial for maintaining immune tolerance and preventing autoimmunity. However, FoxP3 does not function alone, and Helios is considered a potential candidate for defining Treg subsets. In this study, we investigated the expression and function of Helios for identifying Tregs in childhood precursor B-cell acute lymphoblastic leukemia (pre-B ALL). Our results demonstrated that patients with pre-B ALL had a higher percentage of Helios + FoxP3 + CD4 + Tregs, and that there was a positive correlation between the expression of Helios and both the suppressive function of Tregs and the risk gradation of ALL. Helios in combination with CD4 and FoxP3 may be an effective way to detect functional Tregs in pre-B ALL by promoting the secretion of transforming growth factor (TGF)-β1. Furthermore, Helios + Tregs could regulate angiogenesis in the BM niche of pre-B ALL via the VEGFA/VEGFR2 pathway. We also found that Helios + Tregs decreased the apoptosis rate of Nalm-6 cells by up-regulating the expression of the anti-apoptosis protein Bcl-2. In summary, these data strongly imply the physiological importance of Helios expression in Tregs, and suggest that the manipulation of Helios may serve as a novel strategy for cancer immunotherapy. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Comportamiento del Helio en estrellas químicamente peculiares

    Science.gov (United States)

    Malaroda, S. M.; López García, Z.; Leone, F.; Catalano, F.

    Chemically peculiar (CP) stars are characterized by deficiencies and overabundances of some chemical elements of up to 10⁶ times the solar abundance. They also show variations in their spectral lines. It is thought that this is because the magnetic fields present in this type of star are mainly dipolar, with a symmetry axis different from the rotation axis. The distribution of the overabundant and deficient elements is not homogeneous over the stellar surface, and the observed variations would be a direct consequence of stellar rotation. Among the elements with anomalous abundance is helium, whose lines have intensities that are not consistent with a normal abundance and cannot be determined in the usual way, that is, by considering an atmosphere with solar composition. In order to determine the abundance of this element, a study of helium-anomalous stars, He-weak and He-strong, was started. The abundances of other anomalous elements such as Si, Cr, Mg, Mn and Fe will also be determined. These are determined in the traditional way, namely: a) measurement of the equivalent widths of the lines of the different elements analysed; b) adoption of the effective temperature, gravity and helium abundance; c) calculation of the model atmosphere; d) comparison with the observations and restart of an iterative process until agreement among all the analysed parameters is reached. The observations were carried out at the Complejo Astronómico El Leoncito. Seventy-eight helium-anomalous stars were observed. The abundances corresponding to the different chemical elements are currently being calculated. For this purpose Kurucz's ATLAS9 models are used. The NLTE calculations of the helium lines are carried out with the MULTI program and will be compared with those performed with Kurucz's WIDTH9 program (LTE), in order to highlight the importance of

  9. HELIOS2: Benchmarking against experiments for hexagonal and square lattices

    International Nuclear Information System (INIS)

    Simeonov, T.

    2009-01-01

    HELIOS2 is a 2D transport theory program for fuel burnup and gamma-flux calculation. It solves the neutron and gamma transport equations in a general, two-dimensional geometry bounded by a polygon of straight lines. The applied transport solver may be chosen between the Method of Collision Probabilities (CP) and the Method of Characteristics (MoC). The former is well known for its successful application in the preparation of cross-section data banks for 3D simulators for all lattice types of WWER, PWR, BWR, AGR, RBMK and CANDU reactors. The latter, MoC, helps in areas where the computational requirements of CP become too large for practical application. The application of HELIOS2 and the Method of Characteristics to some computationally large benchmarks is presented in this paper. The analysis combines comparisons to measured data from the Hungarian ZR-6 reactor and the JAERI Tank-type Critical Assembly (TCA) facility to verify and validate HELIOS2 and MoC for WWER assembly imitators; configurations with different absorber types (ZrB2, B4C, Eu2O3 and Gd2O3); and critical configurations with stainless steel in the reflector. Core eigenvalues and reaction rates are compared. When the uncertainties are taken into account, the results are generally excellent. A special place in this paper is given to the effect of an iron-made radial reflector. Comparisons to measurements from TIC and TCA for stainless-steel- and iron-reflected cores are presented. The reactivity effect calculated by HELIOS2 is in very good agreement with the measurements. (author)

  10. HELIOS2: Benchmarking Against Experiments for Hexagonal and Square Lattices

    International Nuclear Information System (INIS)

    Simeonov, T.

    2009-01-01

    HELIOS2 is a 2D transport theory program for fuel burnup and gamma-flux calculation. It solves the neutron and gamma transport equations in a general, two-dimensional geometry bounded by a polygon of straight lines. The applied transport solver may be chosen between the Method of Collision Probabilities and the Method of Characteristics. The former is well known for its successful application in the preparation of cross-section data banks for 3D simulators for all lattice types of WWER, PWR, BWR, AGR, RBMK and CANDU reactors. The latter, the method of characteristics, helps in areas where the computational requirements of collision probabilities become too large for practical application. The application of HELIOS2 and the method of characteristics to some computationally large benchmarks is presented in this paper. The analysis combines comparisons to measured data from the Hungarian ZR-6 reactor and JAERI's tank-type critical assembly facility to verify and validate HELIOS2 and the method of characteristics for WWER assembly imitators; configurations with different absorber types (ZrB2, B4C, Eu2O3 and Gd2O3); and critical configurations with stainless steel in the reflector. Core eigenvalues and reaction rates are compared. When the uncertainties are taken into account, the results are generally excellent. A special place in this paper is given to the effect of an iron-made radial reflector. Comparisons to measurements from the Temporary International Collective and the tank-type critical assembly for stainless-steel- and iron-reflected cores are presented. The reactivity effect calculated by HELIOS2 is in very good agreement with the measurements. (Authors)

  11. Lay out, test verification and in orbit performance of HELIOS a temperature control system

    Science.gov (United States)

    Brungs, W.

    1975-01-01

    The HELIOS temperature control system is described. The main design features and the impact of interactions among the experiment, spacecraft system, and temperature control system requirements on the design are discussed. The major limitations of the thermal design regarding a closer approach to the sun are given and related to test experience and performance data obtained in orbit. Finally, the validity of the test results achieved with the prototype and flight spacecraft is evaluated by comparison between test data, orbit temperature predictions and flight data.

  12. Solar-wind predictions for the Parker Solar Probe orbit. Near-Sun extrapolations derived from an empirical solar-wind model based on Helios and OMNI observations

    Science.gov (United States)

    Venzmer, M. S.; Bothmer, V.

    2018-03-01

    Context. The Parker Solar Probe (PSP; formerly Solar Probe Plus) mission will be humanity's first in situ exploration of the solar corona, with closest perihelia at 9.86 solar radii (R⊙) distance to the Sun. It will help answer hitherto unresolved questions on the heating of the solar corona and the source and acceleration of the solar wind and solar energetic particles. The scope of this study is to model the solar-wind environment for PSP's unprecedented distances in its prime mission phase during the years 2018 to 2025. The study is performed within the Coronagraphic German And US SolarProbePlus Survey (CGAUSS), which is the German contribution to the PSP mission as part of the Wide-field Imager for Solar PRobe. Aim. We present an empirical solar-wind model for the inner heliosphere which is derived from OMNI and Helios data. The German-US space probes Helios 1 and Helios 2 flew in the 1970s and observed solar wind in the ecliptic within heliocentric distances of 0.29 au to 0.98 au. The OMNI database consists of multi-spacecraft intercalibrated in situ data obtained near 1 au over more than five solar cycles. The international sunspot number (SSN) and its predictions are used to derive dependencies of the major solar-wind parameters on solar activity and to forecast their properties for the PSP mission. Methods: The frequency distributions of the key solar-wind parameters, magnetic field strength, proton velocity, density, and temperature, are represented by lognormal functions. In addition, we consider the bi-component shape of the velocity distribution, consisting of a slower and a faster part. Functional relations to solar activity are compiled with use of the OMNI data by correlating and fitting the frequency distributions with the SSN. Further, based on the combined data set from both Helios probes, the parameters' frequency distributions are fitted with respect to solar distance to obtain power-law dependencies. Thus an empirical solar-wind model for the inner
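
    As an illustration of the fitting approach described in this record (lognormal frequency distributions plus power-law scaling with heliocentric distance), the sketch below fits a lognormal to a synthetic speed sample and extrapolates its median toward PSP distances. All numerical values and the profile shape are invented placeholders, not the actual CGAUSS/OMNI/Helios fits.

```python
# Minimal sketch, not the published model: lognormal fit of solar-wind proton
# speeds plus a power-law extrapolation of the median with distance.
import numpy as np

rng = np.random.default_rng(0)
speeds_1au = rng.lognormal(mean=np.log(420.0), sigma=0.25, size=5000)  # km/s, fake sample

# Lognormal fit with location fixed at zero: MLE is mean/std of the log-speeds.
mu, sigma = np.log(speeds_1au).mean(), np.log(speeds_1au).std()
median_1au = np.exp(mu)

# Power-law dependence on distance r (au): median(r) = c * r**alpha, fitted
# from made-up "Helios-like" medians between 0.29 and 0.98 au.
r = np.array([0.29, 0.4, 0.6, 0.8, 0.98])
medians = np.array([400.0, 405.0, 410.0, 415.0, 418.0])   # placeholder values
alpha, log_c = np.polyfit(np.log(r), np.log(medians), 1)

def median_speed(r_au):
    """Extrapolated median proton speed at r_au (e.g. PSP perihelion ~0.046 au)."""
    return np.exp(log_c) * r_au ** alpha

print(round(median_1au), round(median_speed(0.046)))
```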

  13. Refuelling design and core calculations at NPP Paks: codes and methods

    International Nuclear Information System (INIS)

    Pos, I.; Nemes, I.; Javor, E.; Korpas, L.; Szecsenyi, Z.; Patai-Szabo, S.

    2001-01-01

    This article gives a brief review of the computer codes used in the fuel management practice at NPP Paks. The code package consists of the HELIOS neutron and gamma transport code for the preparation of the few-group cross-section library, the CERBER code to determine optimal core loading patterns, and the C-PORCA code for detailed reactor-physics analysis of different reactor states. The last two programs have been developed at NPP Paks. HELIOS gives a sturdy basis for our neutron-physics calculations, and the CERBER and C-PORCA programs have been enhanced to a great extent in recent years. Methods and models have become more detailed and accurate as regards the calculated parameters and spatial resolution. With the introduction of a more advanced data-handling algorithm, arbitrary moves of fuel assemblies can be followed either in the reactor core or in the storage pool. The new interactive WINDOWS applications allow easier and more reliable use of the codes. All these computer code developments have made it possible to handle and calculate new kinds of fuel, such as profiled Russian and BNFL fuel with burnable poison, and to support the reliable reuse of fuel assemblies stored in the storage pool. To extend the thermal-hydraulic capability, the COBRA code will also be coupled to the system with KFKI's contribution (Authors)

  14. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
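
    The prediction-error quantity described in this record can be illustrated with a minimal Rescorla-Wagner-style update; this is a generic sketch rather than a model of any specific dopamine experiment.

```python
# Minimal sketch of a reward prediction error (RPE) update: the error is the
# difference between received and predicted reward, and it drives learning.
def update_value(predicted: float, received: float, lr: float = 0.1) -> tuple[float, float]:
    """Return (prediction error, updated value prediction)."""
    rpe = received - predicted          # positive: more reward than predicted
    return rpe, predicted + lr * rpe    # error-driven update of the prediction

value = 0.0
for reward in [1.0, 1.0, 1.0, 0.0]:     # three rewards, then an omission
    rpe, value = update_value(value, reward)
    print(f"RPE={rpe:+.2f}  new prediction={value:.2f}")
```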

  15. Helios, a 20 TW CO2 laser fusion facility

    International Nuclear Information System (INIS)

    Ladish, J.S.

    1979-01-01

    Since June 1978 the Los Alamos Scientific Laboratory's Helios CO 2 laser fusion facility has been committed to an experimental target program to investigate the feasibility of laser produced inertial confinement fusion. This system is briefly described, and preliminary experimental results are reported

  16. Calculations of the actinide transmutation with HELIOS for fuels of light water reactors

    International Nuclear Information System (INIS)

    Francois L, J.L.; Guzman A, J.R.

    2006-01-01

    In this work, the results obtained with the HELIOS code are compared with those obtained by other similar codes used in the international community with respect to the transmutation of minor actinides. For this purpose, the international benchmark 'Calculations of Different Transmutation Concepts' of the Nuclear Energy Agency is analyzed. In this benchmark two cell types are analyzed: a small one corresponding to a standard PWR, and a large one corresponding to a highly moderated PWR. Two discharge burnups are considered: 33 GWd/tHM and 50 GWd/tHM. The following results are addressed: k eff as a function of burnup, the atomic densities of the main actinide isotopes, the activities at the moment the reactor is shut down and at cooling times from 7 up to 50000 years, the void reactivity and the Doppler reactivity. The results are compared with those obtained by the following institutions: FZK (Germany), JAERI (Japan), ITEP (Russia) and IPPE (Russian Federation). In the case of the eigenvalue, the results obtained with HELIOS showed a discrepancy of around 3% Δk/k, which was also observed among the other participants. For the isotopic concentrations of 241 Pu, 242 Pu and 242m Am, the results of all institutions show a discrepancy that grows as burnup increases. Regarding the activities, the discrepancy of the results is acceptable, except in the case of 241 Pu. In the case of the Doppler coefficients the discrepancy of the results is acceptable, except for the cells with high moderation; in the case of the void coefficients, the discrepancy increases as the void fraction increases, becoming quite high at 95% voids. In general, the results are consistent and in good agreement with those obtained by all the participants in the benchmark. The results are within the limits established by the working group on Plutonium Fuels and Innovative Fuel Cycles of the Nuclear

  17. Verification and uncertainty evaluation of HELIOS/MASTER nuclear design system

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jae Seung; Kim, J. C.; Cho, B. O. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-03-01

    A nuclear design system, HELIOS/MASTER, was established and core follow calculations were performed for Yonggwang Unit 1 cycles 1 through 7 and Yonggwang Unit 3 cycles 1 through 2. The accuracy of the HELIOS/MASTER system was evaluated by estimating the uncertainties of reactivity and peaking factors and by comparing the maximum differences of the isothermal temperature coefficient, inverse boron worth and control rod worth with the CASMO-3/MASTER uncertainties. The reactivity uncertainty was estimated at 362 pcm, and the uncertainties of the three-dimensional, axially integrated radial, and planar peaking factors were evaluated as 0.048, 0.034, and 0.044 in relative power units, respectively. The maximum differences of the isothermal temperature coefficient, inverse boron worth and control rod worth were within the CASMO-3/MASTER uncertainties. 17 refs., 17 figs., 10 tabs. (Author)

  18. Verification of spectral burn-up codes on 2D fuel assemblies of the GFR demonstrator ALLEGRO reactor

    International Nuclear Information System (INIS)

    Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Dařílek, Petr; Zajac, Radoslav; Nečas, Vladimír; Haščik, Ján

    2014-01-01

    Highlights: • Verification of the MCNPX, HELIOS and SCALE codes. • MOX and ceramic fuel assembly. • Gas-cooled fast reactor. • Burnup calculation. - Abstract: The gas-cooled fast reactor, which is one of the six GEN IV reactor concepts, is characterized by high operational temperatures and a hard neutron spectrum. The utilization of commonly used spectral codes, developed mainly for LWR reactors operated in the thermal/epithermal neutron spectrum, may be connected with systematic deviations, since the main development effort for these codes has been focused on the thermal part of the neutron spectrum. To be able to carry out proper calculations for fast systems, the codes used have to account for neutron resonances, including the self-shielding effect. The presented study aims at verifying the spectral codes HELIOS, MCNPX and SCALE on the basis of depletion calculations of 2D MOX and ceramic fuel assemblies of the ALLEGRO gas-cooled fast reactor demonstrator in an infinite lattice

  19. Helios Prototype on lakebed during ground check of electric motors

    Science.gov (United States)

    1999-01-01

    The Helios Prototype is an enlarged version of the Centurion flying wing, which flew a series of test flights at Dryden in late 1998. The craft has a wingspan of 247 feet, 41 feet greater than the Centurion, 2 1/2 times that of its solar-powered Pathfinder flying wing, and longer than either the Boeing 747 jetliner or Lockheed C-5 transport aircraft. Helios is one of several remotely piloted aircraft, also known as uninhabited aerial vehicles or UAVs, being developed as technology demonstrators by several small airframe manufacturers under NASA's Environmental Research Aircraft and Sensor Technology (ERAST) project. Developed by AeroVironment, Inc., of Monrovia, Calif., the unique craft is intended to demonstrate two key missions: the ability to reach and sustain horizontal flight at 100,000 feet altitude on a single-day flight, and to maintain flight above 50,000 feet altitude for at least four days, both on electrical power derived from non-polluting solar energy. During later flights, AeroVironment's flight test team will evaluate new motor-control software which may allow the pitch of the aircraft (the nose-up or nose-down attitude in relation to the horizon) to be controlled entirely by the motors. If successful, production versions of the Helios could eliminate the elevators on the wing's trailing edge now used for pitch control, saving weight and increasing the area of the wing available for installation of solar cells.

  20. The interactions of the HELIOS probe with the solar wind plasma

    International Nuclear Information System (INIS)

    Voigt, G.H.; Isensee, U.; Maassberg, H.

    1981-08-01

    The HELIOS solar probe disturbs the solar wind plasma in its near vicinity. Around the probe, a space charge cloud is formed due to strong photoelectron emission and the fading out of solar wind particles. The conducting and insulating parts of the surface are charged differently. These effects result in a very complex potential structure in the vicinity of the probe and on the surface. The interactions of the HELIOS probe with the solar wind plasma are described by models based on the kinetic theory of plasma. The combination of these models yields an entire and consistent representation of the spacecraft charging and the potential structure. Electron spectra measured by the plasma experiment E1 are analysed and compared with results of the theoretical models. (orig.)

  1. Simulation and verification studies of reactivity initiated accident by comparative approach of NK/TH coupling codes and RELAP5 code

    Energy Technology Data Exchange (ETDEWEB)

    Ud-Din Khan, Salah [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center; Peng, Minjun [Harbin Engineering Univ. (China). College of Nuclear Science and Technology; Yuntao, Song; Ud-Din Khan, Shahab [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; Haider, Sajjad [King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center

    2017-02-15

    The objective is to analyze the safety of small modular nuclear reactors of 220 MWe power. Reactivity initiated accidents (RIA) were investigated by a neutron kinetics/thermal hydraulics (NK/TH) coupling approach and by the thermal-hydraulics code RELAP5. The results obtained by these approaches were compared for validation and accuracy of the simulation. In the NK/TH coupling technique, three codes (HELIOS, REMARK, THEATRe) were used. These codes calculate different parameters of the reactor core (fission power, reactivity, fuel temperature and inlet/outlet temperatures). The data exchanges between the codes were assessed by running the codes simultaneously. The results obtained from both the NK/TH coupling and the RELAP5 code analyses complement each other, hence confirming the accuracy of the simulation.
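
    The kind of data exchange described here (the neutron-kinetics side supplying power, the thermal-hydraulics side returning fuel temperature and hence reactivity feedback) can be sketched with a deliberately simplified coupling loop. The point-kinetics and lumped heat-balance models and all constants below are hypothetical placeholders, not the HELIOS/REMARK/THEATRe or RELAP5 models.

```python
# Illustrative NK/TH coupling loop under made-up assumptions: explicit time
# stepping, crude point kinetics without delayed-neutron precursors, and a
# lumped fuel heat balance. Units and magnitudes are placeholders.
ALPHA_FUEL = -2.5e-5   # reactivity per K of fuel temperature (assumed)
LAMBDA = 1e-4          # neutron generation time, s (assumed)
H = 0.02               # lumped fuel-to-coolant heat transfer coefficient (assumed)

def neutron_kinetics(power, rho, dt):
    return power * (1.0 + rho / LAMBDA * dt)           # crude point kinetics, no precursors

def thermal_hydraulics(t_fuel, power, t_coolant, dt):
    return t_fuel + dt * (power - H * (t_fuel - t_coolant))   # lumped fuel heat balance

power, t_fuel, t_cool = 1.0, 600.0, 560.0
rho_ext = 5e-5                                          # externally inserted reactivity (the "RIA")
for step in range(200):
    rho = rho_ext + ALPHA_FUEL * (t_fuel - 600.0)       # Doppler-type feedback
    power = neutron_kinetics(power, rho, dt=1e-3)       # the NK code's role
    t_fuel = thermal_hydraulics(t_fuel, power, t_cool, dt=1e-3)  # the TH code's role
print(round(power, 3), round(t_fuel, 1))
```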

  2. Predictive coding in Agency Detection

    DEFF Research Database (Denmark)

    Andersen, Marc Malmdorf

    2017-01-01

    Agency detection is a central concept in the cognitive science of religion (CSR). Experimental studies, however, have so far failed to lend support to some of the most common predictions that follow from current theories on agency detection. In this article, I argue that predictive coding, a highly promising new framework for understanding perception and action, may solve pending theoretical inconsistencies in agency detection research, account for the puzzling experimental findings mentioned above, and provide hypotheses for future experimental testing. Predictive coding explains how the brain, unbeknownst to consciousness, engages in sophisticated Bayesian statistics in an effort to constantly predict the hidden causes of sensory input. My fundamental argument is that most false positives in agency detection can be seen as the result of top-down interference in a Bayesian system generating high
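
    A toy Bayesian calculation illustrates the argument sketched in this record: with ambiguous sensory evidence, a strong prior on agents turns into a confident "agent present" posterior, i.e. a false positive. The probabilities are illustrative assumptions, not estimates from the article.

```python
# Toy sketch: Bayes rule for agency detection under ambiguous evidence.
def posterior_agent(prior_agent, p_input_given_agent, p_input_given_noise):
    """P(agent | sensory input) by Bayes rule."""
    num = prior_agent * p_input_given_agent
    den = num + (1.0 - prior_agent) * p_input_given_noise
    return num / den

ambiguous = dict(p_input_given_agent=0.5, p_input_given_noise=0.4)  # weak evidence either way
print(posterior_agent(0.1, **ambiguous))   # weak prior   -> ~0.12, no detection
print(posterior_agent(0.8, **ambiguous))   # strong prior -> ~0.83, "agent" perceived
```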

  3. Evolución de estrellas enanas blancas de Helio de masa baja e intermedia

    Science.gov (United States)

    Althaus, L. G.; Benvenuto, O. G.

    Numerosas observaciones realizadas particularmente en los últimos dos años parecen confirmar que las enanas blancas (EB) de helio de masa baja e intermedia son el resultado de la evolución de sistemas binarios cercanos. Con el objeto de realizar una adecuada interpretación de estos objetos son necesarios modelos de EBs de helio lo más detallado posibles. En este estudio presentamos cálculos detallados de la evolución de EBs de helio con masas entre M=0.1Msolar y M=0.5Msolar a intervalos de 0.05Msolar . Para ello, hemos tenido en cuenta los efectos de temperatura finita mediante un código de evolución estelar lo más actualizado posible. En particular, el transporte de energía es descripto en el marco del nuevo modelo para la convección turbulenta desarrollado por Canuto - Mazzitelli. Además hemos considerado la nueva ecuación de estado para plasmas de helio de Saumon et al. y nuevas opacidades radiativas OPAL. Las pérdidas por neutrinos fueron asimismo tenidas en cuenta. Excepto para las EBs más masivas, nuestros modelos iniciales están ubicados en las cercanías de la correspondiente línea de Hayashi para configuraciones de helio. Nuestros resultados muestran que existe una región prohibida en el diagrama observacional HR donde ninguna EB de helio puede encontrarse. Dicha región es para log{(L/Lsolar )}>= -0.25 and log{Teff} >= 4.45. Hemos encontrado también que los tracks evolutivos en el diagrama HR en el dominio de alta luminosidad (pre - EB) son fuertemente afectados por la eficiencia convectiva y que las pérdidas por neutrinos son importantes en los modelos más masivos. Finalmente hemos analizado la estructura de la zona convectiva externa encontrando que la teoría de Canuto - Mazzitelli conduce a un perfil convectivo muy diferente del dado por cualquier versión de la popular teoría de la mixing length. Si bién este comportamiento es decisivo en el contexto de las inestabilides pulsacionales, los radios y gravedades superficiales de

  4. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Full Text Available Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  5. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural
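
    The prediction and prediction-error exchange described in this record can be sketched as a two-level loop in which top-down predictions are subtracted from the input and the resulting error updates the higher-level representation. The linear generative model and gradient update below are assumptions made purely for illustration; this is not the full free-energy formulation.

```python
# Minimal two-level predictive-coding sketch: backward predictions, forward
# prediction errors, iterative update of the higher-level representation.
import numpy as np

W = np.array([[1.0, 0.5],
              [0.2, 1.0]])        # assumed generative weights: hidden cause -> sensory data
sensory = np.array([1.2, 0.9])    # observed input
cause = np.zeros(2)               # higher-level representation (the "percept")

for _ in range(100):
    prediction = W @ cause                # backward (top-down) prediction
    error = sensory - prediction          # forward prediction error
    cause += 0.1 * (W.T @ error)          # update the representation to reduce the error

print(np.round(cause, 3), np.round(W @ cause, 3))  # inferred cause and its prediction of the input
```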

  6. Users guide to the HELIOS backscattering spectrometer (BSS)

    International Nuclear Information System (INIS)

    Bunce, L.J.

    1986-10-01

    The BSS is a backscattering spectrometer installed on the Harwell 136 MeV electron linear accelerator, HELIOS. A general description of the instrument is given, along with the time-of-flight scales, and the run and sample changer control units. The sample environment, vacuum system and detectors of the BSS are described, as well as the preparation, starting and running of an experiment using the BSS. (UK)

  7. AeroVironment technician checks a Helios solar cell panel

    Science.gov (United States)

    2000-01-01

    A technician at AeroVironment's Design Development Center in Simi Valley, California, checks a panel of silicon solar cells for conductivity and voltage. The bi-facial cells, fabricated by SunPower, Inc., of Sunnyvale, California, are among 64,000 solar cells which have been installed on the Helios Prototype solar-powered aircraft to provide power to its 14 electric motors and operating systems. Developed by AeroVironment under NASA's Environmental Research Aircraft and Sensor Technology (ERAST) project, the Helios Prototype is the forerunner of a planned fleet of slow-flying, long-duration, high-altitude aircraft which can perform atmospheric science missions and serve as telecommunications relay platforms in the stratosphere. Target goals set by NASA for the giant 246-foot-span flying wing include reaching and sustaining subsonic horizontal flight at 100,000 feet altitude in 2001, and sustaining continuous flight for at least four days and nights above 50,000 feet altitude in 2003, with the aid of a regenerative fuel cell-based energy storage system now under development.

  8. Development and Implementation of a New HELIOS Diagnostic using a Fast Piezoelectric Valve on the Prototype Material Plasma Exposure eXperiment

    Science.gov (United States)

    Ray, Holly; Biewer, Theodore; Caneses, Juan; Green, Jonathan; Lindquist, Elizabeth; McQuown, Levon; Schmitz, Oliver

    2017-10-01

    A new helium line-ratio spectral monitoring (HELIOS) diagnostic, using a piezoelectric valve with high duty cycles (on/off times of the order of milliseconds), allowing for good background correction, and with measured particle flow rates on the order of 10²⁰ particles/second, is being implemented on Oak Ridge National Laboratory's (ORNL) Prototype Material Plasma Exposure eXperiment (Proto-MPEX). Built in collaboration with the University of Wisconsin - Madison, the HELIOS diagnostic communicates with a Labview program for controlled bursts of helium into the vessel. The open magnetic geometry of Proto-MPEX is ideal for testing and characterizing a HELIOS diagnostic. The circular cross-section with four ports allows for cross comparison between different diagnostics: 1) helium injection with the piezoelectric puff valve, 2) HELIOS line-of-sight high-gain observation, 3) a scannable double Langmuir probe, and 4) HELIOS 2D imaging observation. Electron density and temperature measurements from the various techniques will be compared. This work was supported by the U.S. D.O.E. contracts DE-AC05-00OR22725 and DE-SC00013911.

  9. Automatic target alignment of the Helios laser system

    International Nuclear Information System (INIS)

    Liberman, I.; Viswanathan, V.K.; Klein, M.; Seery, B.D.

    1980-01-01

    An automatic target-alignment technique for the Helios laser facility is reported and verified experimentally. The desired alignment condition is completely described by an autocollimation test. A computer program examines the autocollimated return pattern from the surrogate target and correctly describes any changes required in mirror orientation to yield optimum target alignment with either aberrated or misaligned beams. Automated on-line target alignment is thus shown to be feasible

  10. HELIOS: transformation laws for multiple-collision probabilities with angular dependence

    International Nuclear Information System (INIS)

    Villarino, E.A.; Stamm'ler, R.J.J.

    1996-01-01

    In the lattice code HELIOS, neutron and gamma transport in a given system is treated by the CCCP (current-coupling collision-probability) method. The system is partitioned into space elements which are coupled by currents. Inside the space elements first-flight probabilities are used to obtain the coefficients of the coupling equation and of the equations for the fluxes. The calculation of these coefficients is expensive in CPU time on two scores: the evaluation of the first-flight probabilities, and the matrix inversion to convert these probabilities into the desired coefficients. If the cross sections of two geometrically equal space elements, or of the same element at an earlier burnup level, differ less than a small fraction, considerable CPU time can be saved by using transformation laws. Previously, such laws were derived for first-flight probabilities; here, they are derived for the multiple-collision coefficients of the CCCP equations. They avoid not only the expensive calculations of the first-flight probabilities, but also the subsequent matrix inversion. Various examples illustrate the savings achieved by using these new transformation laws - or by directly using earlier calculated coefficients, if the cross section differences are negligible. (author)

  11. Relativity experiment on Helios - A status report

    Science.gov (United States)

    Anderson, J. D.; Melbourne, W. G.; Cain, D. L.; Lau, E. K.; Wong, S. K.; Kundt, W.

    1975-01-01

    The relativity experiment on Helios (Experiment 11) uses S-band and Doppler data, and spacecraft solar-orbital data, to measure the effects of general relativity in the solar system and the quadrupole moment of the solar gravitational field. Specifically, Experiment 11 is concerned with measuring the following effects: (1) relativistic orbital corrections described by two parameters of the space-time metric, which are both equal to unity in Einstein's theory; (2) orbital perturbations caused by a finite quadrupole moment of an oblate sun, described by zonal harmonics in the solar gravitational field.

  12. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
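
    A much-simplified sketch of the block-classification idea behind BMAP follows: each block is compared with a modeled background and routed to a background-reference-style, background-difference-style, or conventional prediction. The mean-based background model and the thresholds are assumptions for illustration only; the actual BMAP classification and its codec integration are more elaborate.

```python
# Toy block classifier in the spirit of BMAP: compare each block with a
# modeled background and pick a prediction strategy accordingly.
import numpy as np

def classify_block(block, bg_block, t_bg=2.0, t_fg=20.0):
    diff = np.mean(np.abs(block.astype(float) - bg_block.astype(float)))
    if diff < t_bg:
        return "background"    # predict from the modeled background (BRP-like)
    if diff > t_fg:
        return "foreground"    # conventional inter/intra prediction
    return "hybrid"            # code the background-subtracted residual (BDP-like)

rng = np.random.default_rng(1)
frames = rng.integers(98, 103, size=(30, 16, 16))   # fake clip with a nearly static background
background = frames.mean(axis=0)                    # simple mean background model
current = frames[-1].copy()
current[4:12, 4:12] += 60                           # a "moving object" in the last frame
print(classify_block(current[0:4, 0:4], background[0:4, 0:4]))       # -> background
print(classify_block(current[4:12, 4:12], background[4:12, 4:12]))   # -> foreground
```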

  13. Helios-1 Faraday rotation experiment - Results and interpretations of the solar occultations in 1975

    Science.gov (United States)

    Volland, H.; Bird, M. K.; Levy, G. S.; Stelzried, C. T.; Seidel, B. L.

    1977-01-01

    The first of two solar occultations of the satellite Helios-1 in 1975 occurred in April when the satellite's ray path approached the west limb of the sun to a minimum distance of 1.63 solar radii. The second occultation took place in late August/early September when Helios-1 was totally eclipsed by the photosphere. Measurements of the polarization angle of the linearly polarized telemetry signal were performed with automatic tracking polarimeters at the 64 m Goldstone Tracking Station in California and also at the 100 m radio telescope in Effelsberg, West Germany. The coronal Faraday rotation as a function of the solar offset for both occultations is shown in graphs. The theoretical significance of the observations is investigated.
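
    For context, the standard coronal Faraday-rotation integral that underlies such measurements rotates the polarization angle in proportion to the wavelength squared times the line-of-sight integral of electron density and parallel magnetic field. The density and field profiles in the sketch below are arbitrary illustrative power laws, not a reconstruction of the 1975 occultation geometry.

```python
# Rough sketch of the coronal Faraday-rotation integral (SI units); the plasma
# profiles and amplitudes are assumed for illustration only.
import numpy as np

E, ME, C, EPS0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12
RSUN = 6.96e8                                        # solar radius, m
K = E**3 / (8 * np.pi**2 * EPS0 * ME**2 * C**3)      # ~2.6e-13 in SI units

def coronal_faraday_rotation_deg(offset_rsun, wavelength_m=0.13):  # ~S-band wavelength
    s = np.linspace(-30.0, 30.0, 4001) * RSUN        # coordinate along the line of sight
    ds = s[1] - s[0]
    r = np.sqrt(s**2 + (offset_rsun * RSUN) ** 2)    # heliocentric distance of each point
    n_e = 1e11 * (r / RSUN) ** -2.3                  # electron density, m^-3 (assumed profile)
    b_par = 1e-5 * (r / RSUN) ** -2.0                # line-of-sight field, tesla (assumed profile)
    chi = K * wavelength_m**2 * np.sum(n_e * b_par) * ds   # rotation angle, radians
    return np.degrees(chi)

print(round(coronal_faraday_rotation_deg(1.63), 1))  # offset similar to the April 1975 occultation
```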

  14. Technician Marshall MacCready installs solar cells on the Helios Prototype

    Science.gov (United States)

    2000-01-01

    Technician Marshall MacCready carefully lays a panel of solar cells into place on a wing section of the Helios Prototype flying wing at AeroVironment's Design Development Center in Simi Valley, California. The bi-facial cells, manufactured by SunPower, Inc., of Sunnyvale, California, are among 64,000 solar cells which have been installed on the solar-powered aircraft to provide electricity to its 14 motors and operating systems. Developed by AeroVironment under NASA's Environmental Research Aircraft and Sensor Technology (ERAST) project, the Helios Prototype is the forerunner of a planned fleet of slow-flying, long-duration, high-altitude aircraft which can perform atmospheric science missions and serve as telecommunications relay platforms in the stratosphere. Target goals set by NASA for the giant 246-foot-span flying wing include reaching and sustaining subsonic horizontal flight at 100,000 feet altitude in 2001, and sustaining continuous flight for at least four days and nights above 50,000 feet altitude in 2003, with the aid of a regenerative fuel cell-based energy storage system now being developed.

  15. ANDREA: Advanced nodal diffusion code for reactor analysis

    International Nuclear Information System (INIS)

    Belac, J.; Josek, R.; Klecka, L.; Stary, V.; Vocka, R.

    2005-01-01

    A new macro code is being developed at NRI which will allow coupling of an advanced thermal-hydraulics model with neutronics calculations as well as efficient use in the core loading pattern optimization process. This paper describes the current stage of the macro code development. The core simulator is based on the nodal expansion method; the Helios lattice code is used for the preparation of few-group libraries. Standard features such as pin-wise power reconstruction and feedback iterations on critical control rod position, boron concentration and reactor power are implemented. Special attention is paid to system and code modularity in order to enable flexible and easy implementation of new features in the future. The precision of the methods used in the macro code has been verified on available benchmarks. Testing against Temelin PWR operational data is under way (Authors)

  16. Application of discontinuity factors in C-PORCA 7 code

    International Nuclear Information System (INIS)

    Pos, I.; Parko, T.; Szabo, S. P.

    2010-01-01

    In recent years the reactor power has been up-rated to 1485 MW and new fuel types have been utilised at the Paks NPP. To fulfil the demands on the accuracy and correctness of on-line core monitoring and off-line core analysis, the HELIOS/C-PORCA models have been modernised as well. The main step of this development process was to change the mathematics of the 3D two-group diffusion model to be based on the hybrid finite element method. The upgraded mathematics gave very good results when the C-PORCA calculations were compared against mathematical benchmarks and measurements from different units and cycles of NPP Paks and Mochovce. As a final step of the modernisation process, flux discontinuity factors were introduced. Within the VVER community the use of this parameter in core analysis codes is very unusual, in contrast with codes for the same purpose in western countries. In this paper both the reason for introducing discontinuity factors into the HELIOS/C-PORCA models and their effect on the accuracy of the calculation are presented. We have tried to emphasise which kinds of codes and which reactor-physics parameters are mainly influenced by discontinuity factors. The method of calculating flux discontinuity factors in fuel and non-fuel regions of the core is also described. The most important effect of utilising this parameter was that almost all fittings in the C-PORCA code based on in-core measurements became needless. (Authors)
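
    For readers unfamiliar with the parameter, the sketch below shows the generic nodal-method definition of a flux discontinuity factor: the ratio of the heterogeneous (reference) surface flux to the homogenized-node surface flux, used so that the nodal solution preserves the reference reaction rates and interface currents. The numbers are illustrative, and the exact C-PORCA formulation may differ.

```python
# Generic discontinuity-factor sketch (standard nodal-method definition, not
# necessarily the exact C-PORCA implementation).
def discontinuity_factor(het_surface_flux: float, hom_surface_flux: float) -> float:
    """Ratio of heterogeneous to homogeneous surface flux at a node interface."""
    return het_surface_flux / hom_surface_flux

# Illustrative numbers only: a lattice (HELIOS-like) calculation would supply the
# heterogeneous surface flux, the homogeneous diffusion solution the other.
f_left = discontinuity_factor(het_surface_flux=1.08, hom_surface_flux=1.00)
# Interface condition used by the nodal solver:
#   f_left * phi_hom_left == f_right * phi_hom_right
print(f_left)
```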

  17. Development and Validation of a Momentum Integral Numerical Analysis Code for Liquid Metal Fast Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiangyi; Suh, Kune Y. [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this work, this benchmark problem is used to assess the precision of the upgraded in-house code MINA. A comparison of the results from different best-estimate codes employing various grid spacer pressure drop correlations is carried out to suggest the best one. By modifying In's method, the prediction presents good agreement with the experimental data, as shown in Figure 7. The failure of the prediction in previous work was caused by the utilization of Rehme's method, which is categorized into four groups according to different fitting strategies. Through comparison of the drag coefficients calculated by the four groups of Rehme's method, the equivalent drag coefficient calculated by In's method, and the experimental data shown in Figure 8, we can conclude that Rehme's method considerably underestimates the drag coefficients of the grid spacers used in HELIOS and that In's method gives a reasonable prediction. Starting from the core inlet, the accumulated pressure losses are presented in Figure 9 along the accumulated length of the forced convection flow path; the good agreement of the MINA prediction with the experimental result shows that MINA has very good capability in integrated momentum analysis, which makes it robust for the future design scoping method development of LFRs.

  18. Development and Validation of a Momentum Integral Numerical Analysis Code for Liquid Metal Fast Reactor

    International Nuclear Information System (INIS)

    Chen, Xiangyi; Suh, Kune Y.

    2016-01-01

    In this work, this benchmark problem is used to assess the precision of the upgraded in-house code MINA. A comparison of the results from different best-estimate codes employing various grid spacer pressure drop correlations is carried out to suggest the best one. By modifying In's method, the prediction presents good agreement with the experimental data, as shown in Figure 7. The failure of the prediction in previous work was caused by the utilization of Rehme's method, which is categorized into four groups according to different fitting strategies. Through comparison of the drag coefficients calculated by the four groups of Rehme's method, the equivalent drag coefficient calculated by In's method, and the experimental data shown in Figure 8, we can conclude that Rehme's method considerably underestimates the drag coefficients of the grid spacers used in HELIOS and that In's method gives a reasonable prediction. Starting from the core inlet, the accumulated pressure losses are presented in Figure 9 along the accumulated length of the forced convection flow path; the good agreement of the MINA prediction with the experimental result shows that MINA has very good capability in integrated momentum analysis, which makes it robust for the future design scoping method development of LFRs.
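
    The accumulated-pressure-loss bookkeeping that a momentum-integral code performs along the flow path can be sketched as friction losses per axial segment plus local grid-spacer losses summed from the core inlet. The geometry, friction factor and spacer drag coefficient below are invented placeholders, not HELIOS loop data or the In/Rehme correlations.

```python
# Generic accumulated-pressure-loss sketch for a forced-convection flow path;
# all values are assumptions for illustration only.
RHO = 10500.0          # lead-bismuth-like coolant density, kg/m^3 (assumed)
V = 1.0                # coolant velocity, m/s (assumed)
D_H = 0.01             # hydraulic diameter, m (assumed)
F_DARCY = 0.02         # Darcy friction factor, taken constant (assumed)
K_SPACER = 0.8         # equivalent grid-spacer drag coefficient (assumed)

segments = [0.3, 0.3, 0.3, 0.3]      # axial segment lengths, m; a spacer sits after each
dyn_head = 0.5 * RHO * V**2           # dynamic head, Pa

accumulated = 0.0
for length in segments:
    accumulated += F_DARCY * length / D_H * dyn_head   # distributed friction loss
    accumulated += K_SPACER * dyn_head                  # local grid-spacer loss
    print(f"after {length} m segment: {accumulated:.0f} Pa")
```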

  19. A comparison of oxide thickness predictability from the perspective of codes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo-Young; Shin, Hye-In; Kim, Kyung-Tae; Han, Hee-Tak; Kim, Hong-Jin; Kim, Yong-Hwan [KEPCO Nuclear Fuel Co. Ltd., Daejeon (Korea, Republic of)

    2016-10-15

    In Korea, the oxide thickness of fuel rods in OPR1000 and Westinghouse-type nuclear power plant reactors has been evaluated with the imported code A. Because of this, there have been multiple constraints on the operation and maintenance of the fuel rod design system. For this reason, there has been a growing demand to establish an independent fuel rod design system. To meet this goal, KNF has recently developed its own code B for fuel rod design. The objective of this study is to compare the oxide thickness prediction performance of code A and code B and to check the validity of the corrosion behaviour predicted by the newly developed code B. This study is based on Pool Side Examination (PSE) data for performance confirmation. For the examination procedures, the oxide thickness measurement methods and equipment of the PSE are described in detail. In this study, the conservatism and validity of code B in evaluating cladding oxide thickness are confirmed through comparison with code A. The code predictions show higher values than the measured PSE data. Throughout this study, the values from code B are evaluated and proved to be valid from the viewpoint of oxide thickness evaluation. However, the code B input for prediction has been prepared by designer's judgment with complex handwork, which might lead to excessively conservative results and an ineffective design process with some possibility of errors.

  20. Decision-making in schizophrenia: A predictive-coding perspective.

    Science.gov (United States)

    Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas

    2018-05-31

    Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Helio-thermal energy generation: a new option of clean energy generation for Brazil

    International Nuclear Information System (INIS)

    Cavalcanti, Evando Sergio Camelo; Brito, Rubem Bastos Sanches de

    1999-01-01

    This work analyses the three most developed helio-thermal technologies: parabolic cylinder (trough), central tower and parabolic dish. The electric power demand for the region is forecast to be approximately 4,500-7,500 MW, depending on the type of irrigation technology used and the distance to the source

  2. Using of the Serpent code based on the Monte-Carlo method for calculation of the VVER-1000 fuel assembly characteristics

    Directory of Open Access Journals (Sweden)

    V. V. Galchenko

    2016-12-01

    Full Text Available The calculation scheme of a fuel assembly for the preparation of few-group characteristics with the help of the Serpent code is described. This code uses the Monte-Carlo method and continuous-energy microscopic data libraries. The Serpent code is intended for the calculation of fuel assembly characteristics, burnup calculations and the preparation of few-group homogenized macroscopic cross-sections. The results of verification simulations are presented in comparison with other codes (WIMS, HELIOS, NESSEL, etc.) which are used for the neutron-physics analysis of VVER-type fuel.

  3. Comet West: a view from the HELIOS zodiacal light photometers

    International Nuclear Information System (INIS)

    Benensohn, R.M.; Jackson, B.V.

    1987-01-01

    Comet West passed through perihelion on February 25, 1976. The comet crossed the HELIOS A and B spacecraft zodiacal light photometer fields of view as the spacecraft orbited the Sun, allowing them to record the brightness, polarization, and color of the comet and its surrounding interplanetary medium. Data from the U, B, and V photometers across the tail shows a distinct bluing followed by a slight reddening corresponding to the ion and dust tails, respectively, entering the field of view. The non-Earth perspective of the HELIOS photometers allows a comparison of the tail with Earth observations at the same time. Precise location of the nucleus and tail allow the photometer data to be searched for evidence of the comet bow shock and orbital dust. A brightness bump present in the data before the comet reaches some photometer positions, can be shown to approximately form a parabolic shape Sunward and ahead of the orbital motion of the Comet West nucleus. If this is the comet bow shock or bow compression, then it corresponds to a density enhancement of the ambient medium by 1.5 to 2 times in the vicinity of the comet. The distance of the brightness increase from the nucleus by comparison with Comet Halley implies a neutral gas production rate of approximately 3 times that of Halley

  4. Evolutionary modeling and prediction of non-coding RNAs in Drosophila.

    Directory of Open Access Journals (Sweden)

    Robert K Bradley

    2009-08-01

    Full Text Available We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA gene. Our estimates for false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously-published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and furthermore depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 Kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.

  5. Some benchmark calculations for VVER-1000 assemblies by WIMS-7B code

    International Nuclear Information System (INIS)

    Sultanov, N.V.

    2001-01-01

    Our aim in this report is to compare calculation results obtained with the different libraries available in this variant of the WIMS7B code. We had three libraries: the 1986 library, based on the UKNDL files, and the two 1996 libraries, based on the JEF-2.2 files, one in the 69-group approximation and the other in the 172-group approximation. We also wanted to gain some acquaintance with the new option of WIMS-7B, CACTUS. The variant of WIMS-7B was placed at our disposal by the code authors for temporary use for 9 months. It was natural to make comparisons with the analogous values from the TVS-M, MCU, Apollo-2, Casmo-4, Conkemo, MCNP and HELIOS codes, where other libraries were used. In accordance with our aims, calculations of unprofiled and profiled assemblies of the VVER-1000 reactor have been carried out with the CACTUS option. This option provides calculations by the method of characteristics. The calculation results have been compared with the K ∞ values obtained by the other codes. The conclusion from this analysis is that the methodical parts of the errors of these codes have nearly the same values. The spread in K eff values can be explained mainly by differences in the library microscopic cross sections. Nevertheless, a more detailed analysis of the results obtained is required. In conclusion, a depletion calculation of a VVER-1000 cell has been carried out. The dependence of the multiplication factor on depletion obtained by WIMS-7B with different libraries has been compared with that obtained by the TVS-M, MCU, HELIOS and WIMS-ABBN codes. (orig.)

  6. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    International Nuclear Information System (INIS)

    Nava-Dominguez, A.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles

  7. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    Energy Technology Data Exchange (ETDEWEB)

    Nava-Dominguez, A., E-mail: navadoma@aecl.ca; Rao, Y.F., E-mail: raoy@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-08-15

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles.

  8. Evolving a Dynamic Predictive Coding Mechanism for Novelty Detection

    OpenAIRE

    Haggett, Simon J.; Chu, Dominique; Marshall, Ian W.

    2007-01-01

    Novelty detection is a machine learning technique which identifies new or unknown information in data sets. We present our current work on the construction of a new novelty detector based on a dynamical version of predictive coding. We compare three evolutionary algorithms, a simple genetic algorithm, NEAT and FS-NEAT, for the task of optimising the structure of an illustrative dynamic predictive coding neural network to improve its performance over stimuli from a number of artificially gener...

  9. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    Science.gov (United States)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  10. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    International Nuclear Information System (INIS)

    Geng, S.M.; Tew, R.C.

    1994-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free-piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine-specific calibration to bring predictions and experimental data into agreement

  11. HelioTrope: An innovative and efficient prototype for solar power production

    Directory of Open Access Journals (Sweden)

    Papageorgiou George

    2014-01-01

    Full Text Available The solar energy alternative could provide us with all the energy we need, as it exists in vast quantities all around us. We only need to be innovative enough to improve the efficiency of our systems in capturing and converting solar energy into usable forms of power. By making a case for the solar energy alternative, we identify areas where efficiency can be improved and thereby Solar Energy can become a competitive energy source. This paper suggests an innovative approach to solar energy power production, which is manifested in a prototype given the name HelioTrope. The HelioTrope solar energy production prototype is tested on its capability to efficiently convert solar energy into electricity and other forms of energy for storage or direct use. HelioTrope involves an innovative Stirling engine design and a parabolic concentrating dish with a sun tracking system implementing a control algorithm to maximize the capture of solar energy. Further, it utilizes a patent developed by the authors in which a mechanism is designed for the transmission of reciprocating motion of variable amplitude into unidirectional circular motion. This is employed in our prototype for converting linear reciprocating motion into circular motion for electricity production, which gives a significant increase in efficiency and reduces maintenance costs. Preliminary calculations indicate that the HelioTrope approach constitutes a competitive solution for solar power production.

  12. Prediction of the local power factor in BWR fuel cells by means of a multilayer neural network

    International Nuclear Information System (INIS)

    Montes, J.L.; Ortiz, J.J.; Perusquia C, R.; Francois, J.L.; Martin del Campo M, C.

    2007-01-01

    At the beginning of a new operation cycle in a BWR, the reactivity of the core is increased by introducing fresh fuel, the so-called reload fuel. Defining the characteristics of this reload fuel is a combinatorial optimization problem that requires a large amount of CPU time. This situation motivated a study of the possibility of substituting the Helios code, which is used to generate the parameters of the new reload fuel cells, with an artificial neural network for predicting the parameters of BWR reload fuel cells. This work presents the results of training a multilayer neural network that can predict the local peak power factor (LPPF) of such fuel cells. The LPPF is predicted at beginning-of-life conditions of the cell (0.0 MWD/T, 40% void fraction in the moderator, a fuel temperature of 793 K and a moderator temperature of 560 K). The cells considered in this study consist of a 10x10 rod arrangement, of which 92 rods contain U-235, some of them also containing Gd2O3, and 8 positions contain only water. The axial location of these cells inside the reload assembly is directly above the cells containing natural uranium at the base of the reactor core. The neural network is trained with a backpropagation algorithm using a training set formed from previous cell evaluations with the Helios code. Results are also presented for the application of the trained network to predict the LPPF of cells used in the actual operation of Unit One of the Laguna Verde Nuclear Power Station. (Author)
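    As a rough illustration of the regression task described above (not the authors' network), a minimal sketch using a multilayer perceptron trained by backpropagation might look as follows; the feature encoding, layer sizes and the synthetic data standing in for Helios-generated cell evaluations are all assumptions.

```python
# Minimal sketch: a backpropagation-trained multilayer perceptron that maps a flattened
# 10x10 lattice description (e.g. per-rod enrichment / Gd2O3 content) to its LPPF.
# The data here are synthetic placeholders for Helios-generated training cases.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.7, 4.9, size=(500, 200))          # hypothetical cell descriptors
y = 1.0 + 0.1 * X[:, :10].mean(axis=1) + 0.02 * rng.standard_normal(500)  # fake LPPF

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```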

  13. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    Energy Technology Data Exchange (ETDEWEB)

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN were examined to determine if the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that was used to develop the model. In addition, a brief literature search was performed to determine if more recent data have become available since the original model development for model comparison.

  14. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
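    As a loose illustration of the forward-adaptive prediction step described above (block-wise AR coefficients fitted by least squares, with the residual left for the excitation coder), the following 1-D sketch uses an arbitrary block size and model order; the real MFCELP method operates on 3-D multispectral macroblocks.

```python
# Sketch of block-wise forward-adaptive linear prediction: fit AR coefficients to one
# block by least squares and compute the residual that an excitation coder would handle.
import numpy as np

def fit_ar_coeffs(block, order):
    """Least-squares AR coefficients a such that x[n] ~ sum_k a[k] * x[n-k]."""
    A = np.array([block[n - order:n][::-1] for n in range(order, len(block))])
    b = block[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def prediction_residual(block, coeffs):
    order = len(coeffs)
    pred = np.array([coeffs @ block[n - order:n][::-1] for n in range(order, len(block))])
    return block[order:] - pred

rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(256))   # smooth synthetic signal
block = signal[:64]                            # one "macroblock" (1-D stand-in)
a = fit_ar_coeffs(block, order=4)              # forward-adapted per block
residual = prediction_residual(block, a)       # what the excitation coder would model
print("residual variance / block variance:", residual.var() / block.var())
```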

  15. Assessment of neutron transport codes for application to CANDU fuel lattices analysis

    International Nuclear Information System (INIS)

    Roh, Gyu Hong; Choi, Hang Bok

    1999-08-01

    In order to assess the applicability of the WIMS-AECL and HELIOS codes to CANDU fuel lattice analysis, physics calculations have been carried out for the standard CANDU fuel and DUPIC fuel lattices, and the results were compared with those of the Monte Carlo code MCNP-4B. In this study, in order to consider the full isotopic composition and the temperature effect, new MCNP libraries have been generated from ENDF/B-VI release 3 and validated for typical benchmark problems. The TRX-1,2 and BAPL-1,2,3 pin-cell lattices and KENO criticality safety benchmark calculations have been performed with the new MCNP libraries, and the results have shown that the new MCNP library has sufficient accuracy to be used for physics calculations. The lattice codes have then been benchmarked against the MCNP code for the major physics parameters such as the burnup reactivity, void reactivity, relative pin power and Doppler coefficient for the standard CANDU fuel and DUPIC fuel lattices. For the standard CANDU fuel lattice, it was found that the results of WIMS-AECL calculations are consistent with those of MCNP. For the DUPIC fuel lattice, however, the results of WIMS-AECL calculations with the ENDF/B-V library have shown that the discrepancy from the MCNP results increases when the fuel burnup is relatively high. The burnup reactivities of WIMS-AECL calculations with the ENDF/B-VI library have shown excellent agreement with those of the MCNP calculations for both the standard CANDU and DUPIC fuel lattices. However, the Doppler coefficients have relatively large discrepancies compared with the MCNP calculations, and the difference increases as the fuel burns. On the other hand, the results of the HELIOS calculation are consistent with those of MCNP, even though the discrepancy is slightly larger than in the case of the standard CANDU fuel lattice. This study has shown that WIMS-AECL produces reliable results for the natural uranium fuel. However, it is recommended that the WIMS

  16. Distribution and communication in software engineering environments. Application to the HELIOS Software Bus.

    OpenAIRE

    Jean, F. C.; Jaulent, M. C.; Coignard, J.; Degoulet, P.

    1991-01-01

    Modularity, distribution and integration are current trends in Software Engineering. To reach these goals HELIOS, a distributed Software Engineering Environment dedicated to the medical field, has been conceived and a prototype implemented. This environment is built through the collaboration of several well-encapsulated Software Components. This paper presents the architecture retained to allow communication between the different components and focuses on the implementation details of the Software ...

  17. Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?

    Science.gov (United States)

    Heilbron, Micha; Chait, Maria

    2017-08-04

    Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  18. Real coded genetic algorithm for fuzzy time series prediction

    Science.gov (United States)

    Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.

    2017-10-01

    Genetic Algorithm (GA) forms a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (A.I.). Some variants of GA are binary GA, real GA, messy GA, micro GA, saw tooth GA and differential evolution GA. This research article presents a real coded GA for predicting enrollments of the University of Alabama. The University of Alabama enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments of the University of Alabama, and a genetic algorithm optimizes the fuzzy intervals. Results are compared with other eminent authors' works and found satisfactory, indicating that real coded GAs are fast and accurate.
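    A toy sketch of the kind of real-coded GA described (the chromosome encoding, operators, fitness and synthetic data are illustrative assumptions, not the authors' implementation): interval cut points are evolved so that a simple midpoint-based fuzzy-time-series forecast has minimal error.

```python
# Toy real-coded GA that tunes interval cut points; fitness is the error of a crude
# first-order forecast (next value predicted as the midpoint of the current interval).
import numpy as np

rng = np.random.default_rng(2)
data = 13000 + np.cumsum(rng.normal(0, 300, size=22))     # synthetic enrollment-like series
lo, hi = data.min(), data.max()

def forecast_error(cuts):
    edges = np.concatenate(([lo], np.sort(cuts), [hi]))
    mids = (edges[:-1] + edges[1:]) / 2
    idx = np.clip(np.searchsorted(edges, data[:-1], side="right") - 1, 0, len(mids) - 1)
    return np.mean(np.abs(mids[idx] - data[1:]))           # MAE of one-step-ahead forecast

def evolve(n_cuts=6, pop_size=40, gens=200):
    pop = rng.uniform(lo, hi, size=(pop_size, n_cuts))     # real-valued chromosomes
    for _ in range(gens):
        fit = np.array([forecast_error(ind) for ind in pop])
        parents = pop[np.argsort(fit)][: pop_size // 2]    # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * a + (1 - w) * b                    # arithmetic crossover
            child += rng.normal(0, (hi - lo) * 0.01, n_cuts)  # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, kids])
    fit = np.array([forecast_error(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

best_cuts, mae = evolve()
print("optimized interval cut points:", np.sort(best_cuts))
print("one-step forecast MAE:", mae)
```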

  19. Mathematical Formulation used by MATLAB Code to Convert FTIR Interferograms to Calibrated Spectra

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Derek Elswick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-19

    This report discusses the mathematical procedures used to convert raw interferograms from Fourier transform infrared (FTIR) sensors to calibrated spectra. The work discussed in this report was completed as part of the Helios project at Los Alamos National Laboratory. MATLAB code was developed to convert the raw interferograms to calibrated spectra. The report summarizes the developed MATLAB scripts and functions, along with a description of the mathematical methods used by the code. The first step in working with raw interferograms is to convert them to uncalibrated spectra by applying an apodization function to the raw data and then by performing a Fourier transform. The developed MATLAB code also addresses phase error correction by applying the Mertz method. This report provides documentation for the MATLAB scripts.
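    A much-simplified sketch of the processing chain described (apodization, Fourier transform, and a phase correction in the spirit of the Mertz method), written in Python/NumPy rather than MATLAB and making no claim to match the Helios-project scripts:

```python
# Illustrative conversion of a raw interferogram to an uncalibrated spectrum.
# For this symmetric, double-sided example the phase correction reduces to taking the
# magnitude; the actual Mertz method estimates a low-resolution phase from a short
# double-sided segment and applies it to the full single-sided transform.
import numpy as np

def interferogram_to_spectrum(igm):
    n = len(igm)
    centered = igm - igm.mean()                      # remove the DC offset
    apod = np.hanning(n)                             # apodization window (illustrative choice)
    spec = np.fft.rfft(np.fft.ifftshift(centered * apod))   # rotate ZPD to sample 0, then FFT
    phase = np.angle(spec)
    corrected = np.real(spec * np.exp(-1j * phase))  # phase removal; here this is the magnitude
    return np.fft.rfftfreq(n), corrected

# Synthetic double-sided interferogram: fringes from two spectral lines, ZPD at the centre.
x = np.arange(-1024, 1024)
igm = np.cos(2 * np.pi * 0.12 * x) + 0.5 * np.cos(2 * np.pi * 0.20 * x)
freqs, spectrum = interferogram_to_spectrum(igm)
print("dominant fringe frequency (cycles/sample):", freqs[np.argmax(spectrum)])
```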

  20. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  1. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  2. Modification in the FUDA computer code to predict fuel performance at high burnup

    Energy Technology Data Exchange (ETDEWEB)

    Das, M; Arunakumar, B V; Prasad, P N [Nuclear Power Corp., Mumbai (India)

    1997-08-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burn-ups up to 50000 MWD/TeU. The validation field of the code has been extended to prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by a finite element technique. (author). 15 refs, 1 fig.

  3. Modification in the FUDA computer code to predict fuel performance at high burnup

    International Nuclear Information System (INIS)

    Das, M.; Arunakumar, B.V.; Prasad, P.N.

    1997-01-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burn-ups up to 50000 MWD/TeU. The validation field of the code has been extended to prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by a finite element technique. (author). 15 refs, 1 fig

  4. Improving performance of single-path code through a time-predictable memory hierarchy

    DEFF Research Database (Denmark)

    Cilku, Bekim; Puffitsch, Wolfgang; Prokesch, Daniel

    2017-01-01

    -predictable memory hierarchy with a prefetcher that exploits the predictability of execution traces in single-path code to speed up code execution. The new memory hierarchy reduces both the cache-miss penalty time and the cache-miss rate on the instruction cache. The benefit of the approach is demonstrated through...

  5. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given
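    For background on the conventional linear prediction analysis that the thesis builds on, here is a generic autocorrelation-method LPC sketch with an illustrative frame size and model order; the thesis itself replaces the 2-norm criteria with sparsity-promoting ones.

```python
# Generic short-term LPC analysis (autocorrelation method) on a synthetic voiced-like
# frame: a sparse impulse-train excitation through an all-pole resonance, as in the
# source model mentioned above. Illustrative only; not the thesis' sparse LP methods.
import numpy as np

def lpc(frame, order):
    """LP coefficients a such that x[n] ~ sum_k a[k] * x[n-k] (autocorrelation method)."""
    w = frame * np.hamming(len(frame))
    r = np.correlate(w, w, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    R += 1e-9 * np.eye(order)                      # tiny regularization for safety
    return np.linalg.solve(R, r[1:order + 1])

def residual(frame, a):
    order = len(a)
    pred = np.array([a @ frame[n - order:n][::-1] for n in range(order, len(frame))])
    return frame[order:] - pred

excitation = np.zeros(320)
excitation[::80] = 1.0                             # sparse impulse-train excitation
frame = np.zeros(320)
for n in range(320):   # negative indices hit still-zero samples, i.e. zero initial state
    frame[n] = excitation[n] + 1.7 * frame[n - 1] - 0.81 * frame[n - 2]

a = lpc(frame, order=10)
res = residual(frame, a)
print("prediction gain (dB):", 10 * np.log10(frame[10:].var() / res.var()))
```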

  6. Predictive coding of music--brain responses to rhythmic incongruity.

    Science.gov (United States)

    Vuust, Peter; Ostergaard, Leif; Pallesen, Karen Johanne; Bailey, Christopher; Roepstorff, Andreas

    2009-01-01

    During the last decades, models of music processing in the brain have mainly discussed the specificity of brain modules involved in processing different musical components. We argue that predictive coding offers an explanatory framework for functional integration in musical processing. Further, we provide empirical evidence for such a network in the analysis of event-related MEG-components to rhythmic incongruence in the context of strong metric anticipation. This is seen in a mismatch negativity (MMNm) and a subsequent P3am component, which have the properties of an error term and a subsequent evaluation in a predictive coding framework. There were both quantitative and qualitative differences in the evoked responses in expert jazz musicians compared with rhythmically unskilled non-musicians. We propose that these differences trace a functional adaptation and/or a genetic pre-disposition in experts which allows for a more precise rhythmic prediction.

  7. Predictive Coding Strategies for Developmental Neurorobotics

    Science.gov (United States)

    Park, Jun-Cheol; Lim, Jae Hyun; Choi, Hansol; Kim, Dae-Shik

    2012-01-01

    In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to the brain at any given moment of time. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagian, then move onto more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if minimizing prediction errors is used as the objective function. PMID:22586416

  8. Predictive Coding Strategies for Developmental Neurorobotics

    Directory of Open Access Journals (Sweden)

    Jun-Cheol ePark

    2012-05-01

    Full Text Available In recent years, predictive coding strategies have been proposed as a possible account of how the brain might make sense of the truly overwhelming amount of sensory data available to the brain at any given moment of time. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expected to happen and what actually happens. In this paper we present a potpourri of developmental neurorobotics experiments in which minimalist prediction-error based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagian, then move onto more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning such as motor sequence generation, object permanence, and imitation learning may arise if minimizing prediction errors is used as the objective function.

  9. Predictive codes of familiarity and context during the perceptual learning of facial identities

    Science.gov (United States)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.

  10. Measurement of kaons in the Helios experiment

    International Nuclear Information System (INIS)

    van Hecke, H.

    1990-01-01

    We have measured kaons and pions in the rapidity interval 0.8--1.3 and pT interval 100--600 MeV/c. We have observed an excess in the ratio of positive kaons to pions over what is expected from p-p for transverse momenta above 300 MeV/c. No such excess is seen for negatives. This excess of K/π increasing with pT has been observed in 14.5 GeV/A Si + Au collisions by E802 at the AGS. Though their data are concentrated at high pT, there is a small region of overlap between the coverage of HELIOS and E802 for pT = 0.4--0.5 GeV/c. In this region the K/π ratios for both positive and negative particles are very similar even though the bombarding energies differ by more than an order of magnitude. We find no strong indication that the rise in K/π above values expected from p-p collisions is higher for high-ET events than for low-ET events

  11. Establishment of the code for prediction of waste volume on NPP decommissioning

    International Nuclear Information System (INIS)

    Cho, W. H.; Park, S. K.; Choi, Y. D.; Kim, I. S.; Moon, J. K.

    2013-01-01

    In practice, decommissioning waste volume can be estimated appropriately by finding the differences between prediction and actual operation and by considering operational problems or supplementary matters. In nuclear developed countries such as the U.S. or Japan, the decommissioning waste volume is therefore predicted on the basis of experience from their own decommissioning projects. Because of the contamination caused by radioactive material, decontamination activities and management of radioactive waste must be considered in the decommissioning of a nuclear facility, unlike a conventional plant or facility. As decommissioning activities are performed repeatedly, data for similar activities are accumulated, and an optimal strategy can be achieved by comparison with the predicted strategy. Therefore, a variety of decommissioning experiences are most important. In Korea, there are no data on the decommissioning of commercial nuclear power plants yet. However, KAERI has accumulated basic decommissioning data for nuclear facilities through the decommissioning of a research reactor (KRR-2) and a uranium conversion plant (UCP), and DECOMMIS (DECOMMissioning Information Management System) was developed to provide and manage the whole data set of a decommissioning project. Two codes, the FAC code and the WBS code, were established in this process. The FAC code is classified by the decommissioning target of the nuclear facility, and the WBS code is classified by each decommissioning activity. The reason why two codes were created is that the codes used in DEFACS (Decommissioning Facility Characterization management System) and DEWOCS (Decommissioning Work-unit productivity Calculation System) are different from each other, and they were classified for each purpose. DEFACS, which manages the facility, needs a code that categorizes facility characteristics, and DEWOCS, which calculates unit productivity, needs a code that categorizes decommissioning waste volume. KAERI has accumulated decommissioning data of KRR

  12. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than a full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
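    A toy, single-channel illustration of the backward-adaptive idea (least-squares weights for a causal previous-frame neighborhood, trained on already-decoded frames and reused for the current one); the neighborhood shape and the synthetic translating frames are assumptions, not the paper's setup.

```python
# Sketch of backward-adaptive least-square temporal prediction: weights for a 3x3
# previous-frame neighborhood are solved from already-decoded frames (t-2 -> t-1)
# and then reused to predict frame t, so no side information needs to be transmitted.
import numpy as np

def neighborhood_matrix(frame):
    """Stack each interior pixel's 3x3 neighborhood from `frame` into rows."""
    h, w = frame.shape
    cols = [frame[1 + di:h - 1 + di, 1 + dj:w - 1 + dj].ravel()
            for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return np.stack(cols, axis=1)

def lsp_predict(prev2, prev1, cur_shape):
    A = neighborhood_matrix(prev2)                 # regressors: frame t-2 neighborhoods
    b = prev1[1:-1, 1:-1].ravel()                  # targets: co-located pixels in t-1
    w, *_ = np.linalg.lstsq(A, b, rcond=None)      # backward-adapted LS weights
    pred = neighborhood_matrix(prev1) @ w          # apply to t-1 to predict frame t
    out = np.zeros(cur_shape)
    out[1:-1, 1:-1] = pred.reshape(cur_shape[0] - 2, cur_shape[1] - 2)
    return out

rng = np.random.default_rng(5)
base = rng.normal(size=(66, 66))
frames = [np.roll(base, shift=k, axis=1)[1:-1, 1:-1] for k in range(3)]  # slow 1-px motion
prediction = lsp_predict(frames[0], frames[1], frames[2].shape)
mse = np.mean((prediction[1:-1, 1:-1] - frames[2][1:-1, 1:-1]) ** 2)
print("LSP prediction MSE:", mse)
```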

  13. Time delay occultation data of the Helios spacecraft for probing the electron density distribution in the solar corona

    Science.gov (United States)

    Edenhofer, P.; Lueneburg, E.; Esposito, P. B.; Martin, W. L.; Zygielbaum, A. I.; Hansen, R. T.; Hansen, S. F.

    1978-01-01

    S-band time delay measurements were collected from the spacecraft Helios A and B during three solar occultations in 1975/76, within heliocentric distances of about 3 to 215 earth radii, in terms of range, Doppler frequency shift, and electron content. Characteristic features of the measurements and data processing are described. Typical data sets are discussed to probe the electron density distribution near the Sun (at both the west and east limbs), including the outer and extended corona. Steady-state and dynamical aspects of the solar corona are presented and compared with earth-bound K-coronagraph measurements. Using a weighted least squares estimation, parameters of an average coronal electron density profile are derived in a preliminary analysis to yield electron densities at r = 3, 65, and 215 earth radii. Transient phenomena are discussed, and a propagation velocity of approximately 900 km/s is determined for plasma ejecta from a solar flare observed during an extraordinary set of Helios B electron content measurements.

  14. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  15. Sonic boom predictions using a modified Euler code

    Science.gov (United States)

    Siclari, Michael J.

    1992-04-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  16. Void fraction prediction of NUPEC PSBT tests by CATHARE code

    International Nuclear Information System (INIS)

    Del Nevo, A.; Michelotti, L.; Moretti, F.; Rozzia, D.; D'Auria, F.

    2011-01-01

    The current generation of thermal-hydraulic system codes benefits from about sixty years of experiments and forty years of development and is considered a mature tool for providing best estimate descriptions of phenomena and detailed reactor system representations. However, there is a continuous need to check code capabilities in representing nuclear systems, to draw attention to their weak points, and to identify models which need to be refined for best-estimate calculations. Prediction of void fraction and Departure from Nucleate Boiling (DNB) in system thermal-hydraulics is currently based on empirical approaches. The database produced by the Nuclear Power Engineering Corporation (NUPEC), Japan, addresses these issues. It is suitable for supporting the development of new computational tools based on more mechanistic approaches (i.e. three-field codes, two-phase CFD, etc.) as well as for validating the current generation of thermal-hydraulic system codes. Selected experiments belonging to this database are used for the OECD/NRC PSBT benchmark. The paper reviews the activity carried out with the CATHARE2 code on the subchannel (four test sections) and rod bundle (different axial power profiles and test sections) experiments available in the database, in steady state and transient conditions. The results demonstrate the accuracy of the code in predicting the void fraction in different thermal-hydraulic conditions. The tests are performed varying the pressure, coolant temperature, mass flow and power. Sensitivity analyses are carried out addressing the nodalization effect and the influence of the initial and boundary conditions of the tests. (author)

  17. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    Science.gov (United States)

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses are observed in the early visual area that correlate with the perception, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated nonlinear responses when the bar stimulates both sides of the blind spot. Using the generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
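    A minimal two-layer linear version of the hierarchical predictive-coding scheme referred to here (a generic Rao-Ballard-style sketch with arbitrary sizes and weights, not the authors' three-level blind-spot model) can be written as follows.

```python
# Minimal linear predictive-coding hierarchy: latent causes r1, r2 are iteratively
# updated by gradient descent to reduce the prediction errors
#   e0 = x - W1 r1   and   e1 = r1 - W2 r2.
import numpy as np

rng = np.random.default_rng(6)
n_x, n_r1, n_r2 = 16, 8, 4
W1 = rng.normal(scale=0.3, size=(n_x, n_r1))      # generative weights, level 1 -> input
W2 = rng.normal(scale=0.3, size=(n_r1, n_r2))     # generative weights, level 2 -> level 1

def infer(x, steps=200, lr=0.05):
    r1 = np.zeros(n_r1)
    r2 = np.zeros(n_r2)
    for _ in range(steps):
        e0 = x - W1 @ r1                          # prediction error at the input level
        e1 = r1 - W2 @ r2                         # prediction error between levels
        r1 += lr * (W1.T @ e0 - e1)               # errors from below push up, from above pull back
        r2 += lr * (W2.T @ e1)
    return r1, r2, e0, e1

x = W1 @ (W2 @ rng.normal(size=n_r2))             # a stimulus the hierarchy can explain
r1, r2, e0, e1 = infer(x)
print("residual input-level error:", np.linalg.norm(e0))
```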

  18. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency

  19. Development of a mobile manipulator for nuclear plant disaster, HELIOS X. Mechanical design and basic experiments

    International Nuclear Information System (INIS)

    Noda, Satsuya; Hirose, Shigeo; Ueda, Koji; Nakano, Hisami; Horigome, Atsushi; Endo, Gen

    2016-01-01

    In places such as nuclear power plant disaster areas, which are difficult for human workers to enter, robots are required to scout instead of human workers. In this paper, we present a mobile manipulator, HELIOS X, for a nuclear plant decommissioning task. Firstly, we address the demands and specifications for the robot, considering the reconnaissance mission. Then we outline the system of the robot, mainly focusing on the following mechanisms: 'Crank Wheel', 'Main Arm', 'Sphere Link Wrist', 'Camera Arm', 'Control System' and 'System architecture'. In particular, we installed a 3-degree-of-freedom 'Camera Arm' on the 'Main Arm' in order to improve the functionality of the remote control system. This enables the operator to monitor both the gripper and an overall view of the robot. The 'Camera Arm' helps the operator to judge the distance from an object to the gripper, because the operator can interactively move the viewpoint of the camera and monitor from another camera angle without changing the gripper's position. We confirmed the basic functionality of the mobile base, 'Main Arm' and 'Camera Arm' through hardware experiments. We also demonstrated that HELIOS X could pass through a pull-to-open door with a substantial closing force while the operator watched the camera views only. (author)

  20. DRAGON analysis of MOX fueled VVER cell benchmarks

    International Nuclear Information System (INIS)

    Marleau, G.; Foissac, F.

    2002-01-01

    The computational unit-cell benchmark problems for LEU and MOX fueled VVER-1000 ('water-water energetic reactor') have been analyzed using the code DRAGON with ENDF/B-V and ENDF/B-VI based WIMS-AECL cross-section libraries. The results obtained were compared with those generated using the SAS2H module of the SCALE-4.3 computational code system and with the code HELIOS. Good agreement between DRAGON and HELIOS was obtained when the ENDF/B-VI based library was considered, while the ENDF/B-V DRAGON results were generally closer to those obtained using SAS2H. This study was useful for the verification of the DRAGON code and confirms that HELIOS and DRAGON have similar behavior when compatible cross-section libraries are used. (author)

  1. Faraday rotation fluctuation spectra observed during solar occultation of the Helios spacecraft

    Science.gov (United States)

    Andreev, V.; Efimov, A. I.; Samoznaev, L.; Bird, M. K.

    1995-01-01

    Faraday rotation (FR) measurements using linearly polarized radio signals from the two Helios spacecraft were carried out during the period from 1975 to 1984. This paper presents the results of a spectral analysis of the Helios S-band FR fluctuations observed at heliocentric distances from 2.6 to 15 solar radii during the superior conjunctions 1975-1983. The mean intensity of the FR fluctuations does not exceed the noise level for solar offsets greater than ca. 15 solar radii. The rms FR fluctuation amplitude increases rapidly as the radio ray path approaches the Sun, varying according to a power law (exponent: 2.85 +/- 0.15) at solar distances 4-12 solar radii. At distances inside 4 solar radii the increase is even steeper (exponent: 5.6 +/- 0.2). The equivalent two-dimensional FR fluctuation spectrum is well modeled by a single power-law over the frequency range from 5 to 50 mHz. For heliocentric distances larger than 4 solar radii the spectral index varies between 1.1 and 1.6 with a mean value of 1.4 +/- 0.2, corresponding to a 3-D spectral index p = 2.4. FR fluctuations thus display a somewhat lower spectral index compared with phase and amplitude fluctuations. Surprisingly high values of the spectral index were found for measurements inside 4 solar radii (p = 2.9 +/- 0.2). This may arise from the increasingly dominant effect of the magnetic field on radio wave propagation at small solar offsets. Finally, a quasiperiodic component, believed to be associated with Alfven waves, was discovered in some (but not all!) fluctuation spectra observed simultaneously at two ground stations. Characteristic periods and bulk velocities of this component were 240 +/- 30 sec and 300 +/- 60 km/s, respectively.

  2. On Predictive Coding for Erasure Channels Using a Kalman Framework

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang

    2009-01-01

    We present a new design method for robust low-delay coding of autoregressive sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts. It considers the encoder a mechanism that produces signal measurements from which the decoder estimates the original signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model. The encoder is represented by the Kalman measurement at the decoder. The presented method designs the encoder and decoder offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. The design method is shown to provide considerable performance gains, when the transmitted quantized prediction errors are subject to loss, in terms of signal-to-noise ratio
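    As a rough illustration of the decoder-side idea only (Kalman estimation of an autoregressive source from quantized, possibly erased transmissions), under simplifying assumptions that are not the paper's joint encoder/decoder design procedure:

```python
# Decoder-side Kalman estimation of an AR(1) source sent over an erasure channel.
# Simplification: quantization is modeled as additive white measurement noise, and an
# erased packet simply skips the measurement update (prediction-only step).
import numpy as np

rng = np.random.default_rng(7)
a, q_var = 0.95, 1.0                    # AR(1) coefficient and process-noise variance
step = 0.5                              # uniform quantizer step
r_var = step ** 2 / 12                  # quantization noise variance (uniform model)
p_loss = 0.2                            # packet erasure probability

n = 500
x = np.zeros(n)
for t in range(1, n):                   # source: x[t] = a x[t-1] + w[t]
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q_var))

y = step * np.round(x / step)           # "encoder": quantized samples
received = rng.random(n) > p_loss       # erasure channel

xhat, P = 0.0, 1.0
est = np.zeros(n)
for t in range(n):
    xhat, P = a * xhat, a * a * P + q_var        # time update (AR model prediction)
    if received[t]:                              # measurement update only if packet arrived
        K = P / (P + r_var)
        xhat, P = xhat + K * (y[t] - xhat), (1 - K) * P
    est[t] = xhat

print("decoder MSE:", np.mean((est - x) ** 2),
      "raw quantizer MSE on received samples:", np.mean((y[received] - x[received]) ** 2))
```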

  3. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    Science.gov (United States)

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors 0270-6474/14/3416046-12$15.00/0.
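    A minimal sketch of the kind of differential-equation normalization model described (a generic form with illustrative parameter values and a slower pool variable; the paper's exact equations and fits may differ):

```python
# Generic dynamic divisive normalization: each unit's rate R_i relaxes toward its input
# V_i divided by (sigma + pooled activity), where the pool integrates recent output.
import numpy as np

def simulate(values, t_end=2.0, dt=1e-3, tau=0.1, sigma=1.0, w=1.0):
    values = np.asarray(values, dtype=float)
    R = np.zeros_like(values)            # unit firing rates
    G = np.zeros_like(values)            # slower pool / gain variable
    trace = []
    for _ in range(int(t_end / dt)):
        dR = (-R + values / (sigma + w * G.sum())) / tau
        dG = (-G + R) / (2 * tau)        # pool lags behind the output activity
        R, G = R + dt * dR, G + dt * dG
        trace.append(R.copy())
    return np.array(trace)

# Three options with different values: responses show a phasic transient before
# settling to sustained, context-dependent (normalized) levels.
trace = simulate([10.0, 5.0, 1.0])
print("early (phasic) response:", trace[50])
print("late (sustained) response:", trace[-1])
```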

  4. Rudi Stamm'ler contributions and Dragon - 041

    International Nuclear Information System (INIS)

    Roy, R.; Marleau, G.; Hebert, A.

    2010-01-01

    The lattice code DRAGON has been in constant development over the last 25 years. During this period, the DRAGON development team has often been directly influenced by the excellent work of Rudi Stamm'ler. First, his book on reactor physics inspired a large number of programming and calculation techniques that were implemented in DRAGON. Then, the work of Rudi and his collaborators on the lattice code HELIOS also prompted a friendly competition that led us to continuously improve our code in such a way that it could match the performance achieved by HELIOS. This paper provides a description of some characteristics or technologies implemented in DRAGON that were influenced by the work of Rudi Stamm'ler. It also describes a CANDU simulation exercise where the capabilities of the HELIOS and DRAGON codes were combined. (authors)

  5. Evaluation of the MMCLIFE 3.0 code in predicting crack growth in titanium aluminide composites

    International Nuclear Information System (INIS)

    Harmon, D.; Larsen, J.M.

    1999-01-01

    Crack growth and fatigue life predictions made with the MMCLIFE 3.0 code are compared to test data for unidirectional, continuously reinforced SCS-6/Ti-14Al-21Nb (wt pct) composite laminates. The MMCLIFE 3.0 analysis package is a design tool capable of predicting strength and fatigue performance in metal matrix composite (MMC) laminates. The code uses a combination of micromechanic lamina and macromechanic laminate analyses to predict stresses and uses linear elastic fracture mechanics to predict crack growth. The crack growth analysis includes a fiber bridging model to predict the growth of matrix flaws in 0 degree laminates and is capable of predicting the effects of interfacial shear stress and thermal residual stresses. The code has also been modified to include edge-notch flaws in addition to center-notch flaws. The model was correlated with constant amplitude, isothermal data from crack growth tests conducted on 0- and 90 degree SCS-6/Ti-14-21 laminates. Spectrum fatigue tests were conducted, which included dwell times and frequency effects. Strengths and areas for improvement for the analysis are discussed

  6. Helios1A EoL: A Success. For the first Time a Long Final Thrust Scenario, Respecting the French Law on Space Operations

    Science.gov (United States)

    Guerry, Agnes; Moussi, Aurelie; Sartine, Christian; Beaumet, Gregory

    2013-09-01

    HELIOS1A End Of Life (EOL) operations occurred in early 2012. Through this EOL operation, CNES wanted to set an example of French Space Act compliance. Because the satellite wasn't natively designed for such an EOL phase, the operation was touchy and risky. It was organized as a real, full project in order to assess every scenario detail with a dedicated mission analysis, to secure the operations through a detailed risk analysis at system level, and to consider the major failures that could occur during the EOL. A short scenario allowing several objectives to be reached, with additional benefits, was eventually selected. The main objective of this project was to preserve the space environment. The operations were conducted on a "best effort" basis. The French Space Operations Act (FSOA) requirements were met: HELIOS-1A EOL operations were completed successfully.

  7. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  8. Quantitative accuracy assessment of thermalhydraulic code predictions with SARBM

    International Nuclear Information System (INIS)

    Prosek, A.

    2001-01-01

    In recent years, the nuclear reactor industry has focused significant attention on nuclear reactor system code accuracy and uncertainty issues. A few methods suitable for quantifying the accuracy of thermalhydraulic code calculations were proposed and applied in the past. In this study a Stochastic Approximation Ratio Based Method (SARBM) was adapted and proposed for accuracy quantification. The objective of the study was to qualify the SARBM. The study compares the accuracy obtained by the SARBM with the results obtained by the widely used Fast Fourier Transform Based Method (FFTBM). The methods were applied to RELAP5/MOD3.2 code calculations of various BETHSY experiments. The obtained results showed that the SARBM was able to satisfactorily predict the accuracy of the calculated trends when visually comparing plots and comparing the results with the qualified FFTBM. The analysis also showed that the new figure-of-merit, called the accuracy factor (AF), is more convenient than the stochastic approximation ratio for combining single-variable accuracies into a total accuracy. The accuracy results obtained for the selected tests suggest that the acceptability factors for the SAR method were reasonably defined. The results also indicate that AF is a useful quantitative measure of accuracy. (author)
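    For orientation, a simplified FFT-based accuracy figure in the spirit of the FFTBM approach mentioned above (not necessarily the exact published definition, and not the SARBM itself) compares the spectrum of the code-experiment error with the spectrum of the experimental signal:

```python
# Simplified "average amplitude" accuracy figure: ratio of the summed error spectrum to
# the summed spectrum of the experimental trend (smaller means a more accurate prediction).
import numpy as np

def average_amplitude(calc, exp):
    err_spec = np.abs(np.fft.rfft(calc - exp))
    exp_spec = np.abs(np.fft.rfft(exp))
    return err_spec.sum() / exp_spec.sum()

t = np.linspace(0, 100, 1000)
exp = 150 - 0.5 * t + 5 * np.sin(0.3 * t)            # synthetic "measured" trend
calc = 150 - 0.48 * t + 4 * np.sin(0.3 * t + 0.1)    # synthetic code prediction
print("average amplitude:", average_amplitude(calc, exp))
```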

  9. Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.

    Science.gov (United States)

    Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao

    2011-12-01

    In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of 16×16. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden of calculating the BI prediction. Compared with the JSVM 9.11 software, our method reduces the encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates, with only a minor coding performance loss. © 2011 IEEE

  10. DYMEL code for prediction of dynamic stability limits in boilers

    International Nuclear Information System (INIS)

    Deam, R.T.

    1980-01-01

    Theoretical and experimental studies of hydrodynamic instability in boilers were undertaken to resolve the uncertainties of the predictive methods existing at the time the first Advanced Gas Cooled Reactor (AGR) plant was commissioned. The experiments were conducted on a full-scale electrical simulation of an AGR boiler and revealed inadequacies in the existing methods. As a result a new computer code called DYMEL was developed, based on linearisation and Fourier/Laplace transformation of the one-dimensional boiler equations in both time and space. Besides giving good agreement with local experimental data, the DYMEL code has since shown agreement with stability data from the plant, sodium-heated helical tubes, a gas-heated helical tube and an electrically heated U-tube. The code is now used widely within the U.K. (author)

  11. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  12. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    International Nuclear Information System (INIS)

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-01-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude

  13. Development of computer code for determining prediction parameters of radionuclide migration in soil layer

    International Nuclear Information System (INIS)

    Ogawa, Hiromichi; Ohnuki, Toshihiko

    1986-07-01

    A computer code (MIGSTEM-FIT) has been developed to determine the prediction parameters (retardation factor, water flow velocity, dispersion coefficient, etc.) of radionuclide migration in a soil layer from the concentration distribution of the radionuclide in the soil layer or in the effluent. In this code, the solution of the predicting equation for radionuclide migration is compared with the measured concentration distribution, and the most adequate parameter values can be determined by the flexible tolerance method. The validity of the finite difference method, one of the methods used to solve the predicting equation, was confirmed by comparison with the analytical solution, and the validity of the fitting method was confirmed by fitting concentration distributions calculated from known parameters. An examination of the errors showed that the error in the parameters obtained with this code was smaller than the error in the measured concentration distribution. (author)
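
    The parameter-determination idea can be sketched in a few lines of Python: fit a retardation factor, pore-water velocity and dispersion coefficient to a measured concentration profile by minimizing a least-squares misfit. The analytical step-input advection-dispersion solution and the use of Nelder-Mead as a stand-in for the flexible tolerance method are assumptions for illustration, not the MIGSTEM-FIT formulation.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import erfc

        def conc(x, t, R, v, D, c0=1.0):
            # Standard 1-D advection-dispersion solution with retardation R for a
            # continuous step input (an illustrative choice of predicting equation).
            return 0.5 * c0 * erfc((R * x - v * t) / (2.0 * np.sqrt(D * R * t)))

        def misfit(params, x, t, measured):
            R, v, D = params
            if R <= 0.0 or v <= 0.0 or D <= 0.0:
                return 1.0e12            # keep the search in the physical range
            return float(np.sum((conc(x, t, R, v, D) - measured) ** 2))

        # Hypothetical profile "measured" at t = 30 d, generated from known values
        x = np.linspace(0.0, 50.0, 26)                    # depth [cm]
        true_R, true_v, true_D = 5.0, 2.0, 1.5            # [-], [cm/d], [cm^2/d]
        rng = np.random.default_rng(1)
        measured = conc(x, 30.0, true_R, true_v, true_D) + rng.normal(0.0, 0.01, x.size)

        fit = minimize(misfit, x0=(2.0, 1.0, 1.0), args=(x, 30.0, measured),
                       method="Nelder-Mead")
        print("fitted R, v, D:", fit.x)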

  14. 3DCORE: Forward modeling of solar storm magnetic flux ropes for space weather prediction

    Science.gov (United States)

    Möstl, C.; Amerstorfer, T.; Palmerio, E.; Isavnin, A.; Farrugia, C. J.; Lowder, C.; Winslow, R. M.; Donnerer, J. M.; Kilpua, E. K. J.; Boakes, P. D.

    2018-05-01

    3DCORE forward models solar storm magnetic flux ropes, an approach called 3-Dimensional Coronal Rope Ejection (3DCORE). The code is able to produce synthetic in situ observations of the magnetic cores of solar coronal mass ejections sweeping over planets and spacecraft. Near Earth, these data are currently taken by the Wind, ACE and DSCOVR spacecraft. Other spacecraft suitable for making this kind of observation, carrying magnetometers in the solar wind, were MESSENGER, Venus Express, MAVEN, and even Helios.

  15. RELAP5/MOD2 code modifications to obtain better predictions for the once-through steam generator

    International Nuclear Information System (INIS)

    Blanchat, T.; Hassan, Y.

    1989-01-01

    The steam generator is a major component in pressurized water reactors. Predicting the response of a steam generator during both steady-state and transient conditions is essential in studying the thermal-hydraulic behavior of a nuclear reactor coolant system. Therefore, many analytical and experimental efforts have been performed to investigate the thermal-hydraulic behavior of the steam generators during operational and accident transients. The objective of this study is to predict the behavior of the secondary side of the once-through steam generator (OTSG) using the RELAP5/MOD2 computer code. Steady-state conditions were predicted with the current version of the RELAP5/MOD2 code and compared with experimental plant data. The code predictions consistently underpredict the degree of superheat. A new interface friction model has been implemented in a modified version of RELAP5/MOD2. This modification, along with changes to the flow regime transition criteria and the heat transfer correlations, correctly predicts the degree of superheat and matches plant data

  16. MASTR: multiple alignment and structure prediction of non-coding RNAs using simulated annealing

    DEFF Research Database (Denmark)

    Lindgreen, Stinus; Gardner, Paul P; Krogh, Anders

    2007-01-01

    MOTIVATION: As more non-coding RNAs are discovered, the importance of methods for RNA analysis increases. Since the structure of ncRNA is intimately tied to the function of the molecule, programs for RNA structure prediction are necessary tools in this growing field of research. Furthermore, it is known that RNA structure is often evolutionarily more conserved than sequence. However, few existing methods are capable of simultaneously considering multiple sequence alignment and structure prediction. RESULT: We present a novel solution to the problem of simultaneous structure prediction and multiple alignment of RNA sequences, based on simulated annealing of a cost function that considers sequence conservation, covariation and basepairing probabilities. The results show that the method is very competitive to similar programs available today, both in terms of accuracy and computational efficiency. AVAILABILITY: Source code available from http://mastr.binf.ku.dk/

  17. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the Helios and Ulysses proton density, temperature and bulk velocity back to the corona. Using simple mass flux conservation, we show very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations that arise from these empirical observational models.
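
    The mass-flux-conservation scaling mentioned above is simple enough to sketch directly in Python: for a steady, spherically expanding wind, n(r) v(r) r^2 is constant, so a density measured in situ can be extrapolated sunward once a speed closer to the Sun is assumed. The numbers below are illustrative, not Helios or Ulysses measurements.

        # Spherical mass-flux conservation: n1 * v1 * r1^2 = n2 * v2 * r2^2
        AU_IN_RSUN = 215.0                 # 1 AU expressed in solar radii

        n1 = 5.0                           # cm^-3, proton density near 1 AU (mock)
        v1 = 400.0                         # km/s, bulk speed near 1 AU (mock)
        r1 = 1.0 * AU_IN_RSUN              # heliocentric distance in solar radii

        r2 = 10.0                          # solar radii, target distance
        v2 = 100.0                         # km/s, assumed slower speed near the Sun

        n2 = n1 * (v1 / v2) * (r1 / r2) ** 2
        print(f"extrapolated proton density at {r2:.0f} Rs: {n2:.0f} cm^-3")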

  18. TRANSENERGY S: computer codes for coolant temperature prediction in LMFBR cores during transient events

    International Nuclear Information System (INIS)

    Glazer, S.; Todreas, N.; Rohsenow, W.; Sonin, A.

    1981-02-01

    This document is intended as a user/programmer manual for the TRANSENERGY-S computer code. The code represents an extension of the steady state ENERGY model, originally developed by E. Khan, to predict coolant and fuel pin temperatures in a single LMFBR core assembly during transient events. Effects which may be modelled in the analysis include temporal variation in gamma heating in the coolant and duct wall, rod power production, coolant inlet temperature, coolant flow rate, and thermal boundary conditions around the single assembly. Numerical formulations of energy equations in the fuel and coolant are presented, and the solution schemes and stability criteria are discussed. A detailed description of the input deck preparation is presented, as well as code logic flowcharts, and a complete program listing. TRANSENERGY-S code predictions are compared with those of two different versions of COBRA, and partial results of a 61 pin bundle test case are presented

  19. IN-MACA-MCC: Integrated Multiple Attractor Cellular Automata with Modified Clonal Classifier for Human Protein Coding and Promoter Prediction

    Directory of Open Access Journals (Sweden)

    Kiran Sree Pokkuluri

    2014-01-01

    Protein coding and promoter region predictions are very important challenges in bioinformatics (Attwood and Teresa, 2000). The identification of these regions plays a crucial role in understanding genes. Many novel computational and mathematical methods have been introduced, and existing methods are being refined, for predicting the two regions separately; still, there is scope for improvement. We propose a classifier that is built with MACA (multiple attractor cellular automata) and MCC (modified clonal classifier) to predict both regions with a single classifier. The proposed classifier is trained and tested with the Fickett and Tung (1992) datasets for protein coding region prediction for DNA sequences of lengths 54, 108, and 162. The classifier is also trained and tested with MMCRI datasets for protein coding region prediction for DNA sequences of lengths 252 and 354. The proposed classifier is further trained and tested with promoter sequences from the DBTSS (Yamashita et al., 2006) dataset and non-promoters from the EID (Saxonov et al., 2000) and UTRdb (Pesole et al., 2002) datasets. The proposed model can predict both regions with an average accuracy of 90.5% for promoter and 89.6% for protein coding region predictions. The specificity and sensitivity values of the promoter and protein coding region predictions are 0.89 and 0.92, respectively.

  20. Improved predictions of nuclear reaction rates with the TALYS reaction code for astrophysical applications

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J

    2008-01-01

    Context. Nuclear reaction rates for astrophysical applications are traditionally determined on the basis of Hauser-Feshbach reaction codes. These codes adopt a number of approximations that have never been tested, such as a simplified width fluctuation correction, the neglect of delayed or multiple-particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. Aims. The reaction code TALYS has recently been updated to estimate the Maxwellian-averaged reaction rates that are of astrophysical relevance. These new developments enable the reaction rates to be calculated with increased accuracy and reliability and the approximations of previous codes to be investigated. Methods. The TALYS predictions for the thermonuclear rates of relevance to astrophysics are detailed and compared with those derived by widely-used codes for the same nuclear ingredients. Results. It is shown that TALYS predictions may differ significantly from those of previous codes, in particular for nuclei for which little or no nuclear data is available. The pre-equilibrium process is shown to influence the astrophysics rates of exotic neutron-rich nuclei significantly. For the first time, the Maxwellian-averaged (n, 2n) reaction rate is calculated for all nuclei and its competition with the radiative capture rate is discussed. Conclusions. The TALYS code provides a new tool to estimate all nuclear reaction rates of relevance to astrophysics with improved accuracy and reliability. (authors)
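
    For orientation, the quantity being tabulated can be written down directly: the Maxwellian-averaged cross section is <sigma> = 2/sqrt(pi) * (kT)^-2 * Int sigma(E) E exp(-E/kT) dE. The short Python sketch below evaluates it for a hypothetical pointwise cross section; the cross-section shape and the simple trapezoidal integration are assumptions, not TALYS output or internals.

        import numpy as np

        def trapezoid(y, x):
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        def macs(E, sigma, kT):
            # Maxwellian-averaged cross section (energies in MeV, sigma in mb)
            weight = E * np.exp(-E / kT)
            return 2.0 / np.sqrt(np.pi) * trapezoid(sigma * weight, E) / kT**2

        # Hypothetical 1/v-like capture cross section on a pointwise grid
        E = np.linspace(1.0e-4, 1.0, 5000)          # MeV
        sigma = 50.0 * np.sqrt(0.025 / E)           # mb
        print(f"MACS at kT = 30 keV: {macs(E, sigma, 0.030):.1f} mb")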

  1. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    Science.gov (United States)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together calculate the noise from a rotor wake/stator interaction. The code is a combination of subroutines from the two NASA programs with many new features added by Pratt & Whitney. To do a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first part discusses the technical documentation of the program as improved by Pratt & Whitney; the second part is a user's manual which describes how input files are created and how the code is run.

  2. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    Science.gov (United States)

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  3. Information-Theoretic Evidence for Predictive Coding in the Face-Processing System.

    Science.gov (United States)

    Brodski-Guerniero, Alla; Paasch, Georg-Friedrich; Wollstadt, Patricia; Özdemir, Ipek; Lizier, Joseph T; Wibral, Michael

    2017-08-23

    Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces showed as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity. Our perception is not determined only by the information our eyes/retina and other sensory organs receive from the outside world, but strongly depends also on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity. Copyright © 2017 the authors 0270-6474/17/378273-11$15.00/0.
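
    The storage measure named above has a compact definition: the mutual information between a signal's past and its present value. The Python sketch below estimates it on a synthetic autoregressive series with a one-sample history and coarse binning; these simplifications, and the toy data, are assumptions and have nothing to do with the study's MEG/beamformer pipeline.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.zeros(20000)
        for t in range(1, x.size):            # an AR(1) process stores information
            x[t] = 0.8 * x[t - 1] + rng.normal()

        edges = np.quantile(x, [0.25, 0.5, 0.75])     # discretize into 4 bins
        s = np.digitize(x, edges)
        past, present = s[:-1], s[1:]

        joint = np.zeros((4, 4))
        for i, j in zip(past, present):               # joint histogram
            joint[i, j] += 1.0
        joint /= joint.sum()
        p_past = joint.sum(axis=1, keepdims=True)
        p_pres = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        ais = float(np.sum(joint[nz] * np.log2(joint[nz] / (p_past @ p_pres)[nz])))
        print(f"plug-in AIS estimate: {ais:.3f} bits")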

  4. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    Science.gov (United States)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  5. Foxp3+ Treg expanded from patients with established diabetes reduce Helios expression while retaining normal function compared to healthy individuals.

    Directory of Open Access Journals (Sweden)

    Weiting Du

    Foxp3(+) regulatory T cells (Treg) play a crucial role in regulating immune tolerance. The use of Treg to restore immune tolerance is considered an attractive novel approach to inhibit autoimmune disease, including type 1 diabetes (T1D), and to prevent rejection of organ transplants. In view of the goal of developing autologous Treg-based cell therapy for patients with long-term (>15 years) T1D, it will be necessary to expand a sufficient amount of functional Treg in vitro in order to study and compare Treg from T1D patients and healthy subjects. Our results have demonstrated that there is a comparable frequency of Treg in the peripheral blood lymphocytes (PBLs) of patients with long-term T1D relative to those in healthy subjects; however, Th1 cells, but not Th17 cells, were increased in the T1D patients. Further, more Treg in PBLs from T1D patients than from healthy subjects expressed the CD45RO(+) memory cell phenotype, suggesting they were antigen-experienced cells. After isolation, Treg from both T1D patients and healthy subjects were successfully expanded with high purity. Although there was no difference in Helios expression on Treg in PBLs, in vitro expansion led to fewer Helios-expressing Treg from T1D patients than healthy subjects. While more Th1-like Treg expressing IFN-γ or TNF-α were found in the PBLs of T1D patients than healthy controls, there was no such difference in the expanded Treg. Importantly, expanded Treg from both subject groups were able to suppress autologous or allogeneic CD8(+) effector T cells equally well. Our findings demonstrate that a large number of ex vivo expanded functional Treg can be obtained from long-term T1D patients, although fewer expanded Treg expressed a high level of Helios. Thus, based on the positive outcomes, these potent expanded Treg from diabetic human patients may be useful in treating T1D or preventing islet graft rejection.

  6. Helios expression and Foxp3 TSDR methylation of IFNy+ and IFNy- Treg from kidney transplant recipients with good long-term graft function.

    Directory of Open Access Journals (Sweden)

    Karina Trojan

    There is circumstantial evidence that IFNy+ Treg might have clinical relevance in transplantation. IFNy+ Treg express IFNy receptors and are induced by IFNy. In the present study we investigated, in kidney transplant recipients with good long-term stable graft function, the absolute cell counts of IFNy+ Treg subsets and whether their expression of Foxp3 is stable or transient. Helios expression determined by eight-color-fluorescence flow cytometry and the methylation status of the Foxp3 Treg-specific demethylation region (TSDR) served as indicators for the stability of Foxp3 expression. Methylation status was investigated in enriched IFNy+ and IFNy- Treg preparations originating from peripheral blood using high resolution melt analysis. A total of 136 transplant recipients and 52 healthy controls were studied. Proportions of IFNy+ Treg were similar in patients and healthy controls (0.05% and 0.04% of all CD4+ lymphocytes; p = n.s.). Patients also had similar absolute counts of IFNy-producing Helios+ and Helios- Treg (p = n.s.). Most of the IFNy+ and IFNy- Treg in transplant recipients had a methylated Foxp3 TSDR; however, there was a sizeable proportion of IFNy+ and IFNy- Treg with a demethylated Foxp3 TSDR. Male and female patients showed more frequently methylated IFNy+ and IFNy- Treg than male and female controls (all p<0.05). Kidney transplant recipients with good long-term stable graft function have similar levels of IFNy+ Treg as healthy controls. IFNy+ and IFNy- Treg subsets in patients consist of cells with stable and cells with transient Foxp3 expression; however, patients showed more frequently methylated IFNy+ and IFNy- Treg than controls. The data show increased levels of Treg subsets with stable as well as transient Foxp3 expression in patients with stable allograft acceptance compared to healthy controls.

  7. Gaseous saturable absorbers for the Helios CO2 laser system

    International Nuclear Information System (INIS)

    Haglund, R.F. Jr.; Nowak, A.V.; Czuchlewski, S.J.

    1981-01-01

    Saturable absorbers are widely used to suppress parasitic oscillations in large-aperture, high-power CO2 fusion-laser systems. We report experimental results on SF6-based gaseous saturable absorbers used for parasitic suppression in the eight-beam, 10 kJ Helios fusion-laser system. The gas mix effectively quenches self-lasing in the 9 and 10 μm branches of the CO2 laser spectrum while simultaneously allowing high transmission of subnanosecond multiwavelength pulses for target-irradiation experiments. The gas isolator now in use consists of SF6 and the additional fluorocarbons: 1,1-difluoroethane (FC-152a); dichlorodifluoromethane (FC-12); chloropentafluoroethane (FC-115); 1,1-dichloro-2,2-difluoroethylene (FC-1112a); chlorotrifluoroethylene (FC-1113); and perfluorocyclobutane (FC-C318). The saturation of the mix was studied as a function of incident fluence, pressure, cell length, and incident wavelength. Experimental results are presented on the saturation properties of pure SF6 and FC-152a and compared with the saturation behavior of CO2 at 400 °C.
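
    As a rough illustration of what "saturation as a function of incident fluence" means, the Python sketch below integrates a generic two-level saturable-absorber law dI/dz = -a0*I/(1 + I/Isat) through a cell and reports the transmission at several input intensities. The model choice and every coefficient are assumptions; they do not represent the SF6/fluorocarbon mix of the study.

        a0 = 2.0            # small-signal absorption coefficient [1/m], assumed
        Isat = 1.0e6        # saturation intensity [W/cm^2], assumed
        L = 1.0             # cell length [m]

        def transmission(I_in, steps=2000):
            I, dz = I_in, L / steps
            for _ in range(steps):            # forward-Euler march through the cell
                I -= a0 * I / (1.0 + I / Isat) * dz
            return I / I_in

        for I_in in (1.0e4, 1.0e5, 1.0e6, 1.0e7, 1.0e8):
            print(f"I_in = {I_in:.0e} W/cm^2  ->  T = {transmission(I_in):.3f}")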

  8. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel

    International Nuclear Information System (INIS)

    Chen, K.F.; Olson, C.A.

    1983-01-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction
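
    A simplified numerical counterpart of that verification problem can be sketched in Python: compute the quality profile from an energy balance with a cosine (chopped-sine) heat flux, form the homogeneous mixture specific volume, and integrate the friction, gravity and acceleration contributions to the pressure drop. The constant friction factor and all property values are assumed for illustration; this is neither the paper's analytic solution nor the CONDOR model.

        import numpy as np

        L, D = 3.8, 0.012              # heated length [m], hydraulic diameter [m]
        G, q0 = 1500.0, 8.0e5          # mass flux [kg/m^2 s], peak heat flux [W/m^2]
        h_fg = 1.5e6                   # latent heat near 7 MPa [J/kg], assumed
        rho_f, rho_g = 740.0, 36.0     # saturated liquid/vapour density [kg/m^3]
        f, g = 0.02, 9.81              # friction factor [-], gravity [m/s^2]
        PhA = 4.0 / D                  # heated perimeter / flow area for a tube

        z = np.linspace(0.0, L, 2001)
        dz = z[1] - z[0]
        q = q0 * np.sin(np.pi * z / L)                 # cosine-shaped axial flux
        x = np.cumsum(q) * dz * PhA / (G * h_fg)       # quality, saturated inlet assumed
        v = x / rho_g + (1.0 - x) / rho_f              # homogeneous specific volume

        dpdz = f * G**2 / (2.0 * D) * v + g / v + G**2 * np.gradient(v, z)
        print(f"HEM channel pressure drop ~ {np.sum(dpdz) * dz / 1.0e3:.1f} kPa")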

  9. Biocomputational prediction of small non-coding RNAs in Streptomyces

    Czech Academy of Sciences Publication Activity Database

    Pánek, Josef; Bobek, Jan; Mikulík, Karel; Basler, Marek; Vohradský, Jiří

    2008-01-01

    Vol. 9, No. 217 (2008), pp. 1-14. ISSN 1471-2164. R&D Projects: GA ČR GP204/07/P361; GA ČR GA203/05/0106; GA ČR GA310/07/1009. Grant - others: XE(XE) EC Integrated Project ActinoGEN, LSHM-CT-2004-005224. Institutional research plan: CEZ:AV0Z50200510. Keywords: non-coding RNA * streptomyces * biocomputational prediction. Subject RIV: IN - Informatics, Computer Science. Impact factor: 3.926, year: 2008

  10. A Cerebellar Framework for Predictive Coding and Homeostatic Regulation in Depressive Disorder.

    Science.gov (United States)

    Schutter, Dennis J L G

    2016-02-01

    Depressive disorder is associated with abnormalities in the processing of reward and punishment signals and disturbances in homeostatic regulation. These abnormalities are proposed to impair error minimization routines for reducing uncertainty. Several lines of research point towards a role of the cerebellum in reward- and punishment-related predictive coding and homeostatic regulatory function in depressive disorder. Available functional and anatomical evidence suggests that, in addition to the cortico-limbic networks, the cerebellum is part of the dysfunctional brain circuit in depressive disorder. It is proposed that impaired cerebellar function contributes to abnormalities in predictive coding and homeostatic dysregulation in depressive disorder. Further research on the role of the cerebellum in depressive disorder may further extend our knowledge of the functional and neural mechanisms of depressive disorder and the development of novel antidepressant treatment strategies targeting the cerebellum.

  11. Rhythmic complexity and predictive coding: A novel approach to modeling rhythm and meter perception in music

    Directory of Open Access Journals (Sweden)

    Peter eVuust

    2014-10-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (‘rhythm’) and the brain's anticipatory structuring of music (‘meter’). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.

  12. A curious fact: Photic sneeze reflex. Autosomical dominant compelling helio-ophthalmic outburst syndrome.

    Science.gov (United States)

    Sevillano, C; Parafita-Fernández, A; Rodriguez-Lopez, V; Sampil, M; Moraña, N; Viso, E; Cores, F J

    2016-07-01

    To assess ocular involvement in the pathophysiology of autosomal dominant compelling helio-ophthalmic outburst syndrome (ACHOOs). An interview was conducted with a Caucasian family that showed clinical features of ACHOOs. Twelve of them had photic reflex and were recruited. A complete eye evaluation was made. A dominant autosomal inheritance with mild penetrance was demonstrated, with 67% of the studied subjects showing some degree of prominent corneal nerves. No other eye changes were found. Prominent corneal nerves may be associated with ACHOOs. The other eye structures studied do not seem to play a role in ACHOOs. Further studies are needed to understand the physiology of the ACHOOs. Copyright © 2016 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  13. PCCE-A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

    The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems but only in steady-state conditions. The PCCE code (Predictive Code for Calorimetric Estimations), here presented, introduces some novelties. In fact, it can simulate with an analytical approach both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on lumped (concentrated) parameters, has been developed and is here described in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.
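
    The lumped (concentrated) parameter idea can be illustrated with a single thermal mass under a pulsed power load, exchanging heat with coolant held at a fixed temperature. Every value in the Python sketch below is an assumption chosen for illustration; this is not the PCCE formulation.

        # Explicit Euler integration of  m*c * dT/dt = P(t) - h*A * (T - T_cool)
        m_c = 5.0e3        # thermal capacitance m*c [J/K], assumed
        hA = 150.0         # convective conductance to the coolant h*A [W/K], assumed
        T_cool = 30.0      # coolant temperature [deg C]
        P_pulse = 1.0e4    # absorbed power during the pulse [W]
        t_on = 10.0        # pulse length [s]

        dt, t_end = 0.05, 120.0
        T = T_cool
        for k in range(int(t_end / dt)):
            P = P_pulse if k * dt < t_on else 0.0
            T += dt * (P - hA * (T - T_cool)) / m_c
        print(f"component temperature after {t_end:.0f} s: {T:.1f} deg C")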

  14. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code
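
    In general terms, the lumped-parameter network idea can be sketched as a set of node pressure unknowns connected by branches with a quadratic pressure-flow law, with the fire contributing an extra volumetric source at one node. The two-room topology, resistances and source term in the Python sketch below are invented for illustration and are unrelated to FIRAC's models.

        import numpy as np
        from scipy.optimize import fsolve

        R = {"supply": 40.0, "duct": 25.0, "filter": 60.0}   # Pa/(m^3/s)^2, assumed
        P_in, P_out = 250.0, 0.0     # fan supply and exhaust plenum pressures [Pa]
        Q_fire = 0.3                 # volumetric source from the fire room [m^3/s]

        def branch_flow(dp, r):
            return np.sign(dp) * np.sqrt(abs(dp) / r)

        def balance(p):
            p1, p2 = p                                 # fire room, corridor
            q_supply = branch_flow(P_in - p1, R["supply"])
            q_duct = branch_flow(p1 - p2, R["duct"])
            q_filter = branch_flow(p2 - P_out, R["filter"])
            return [q_supply + Q_fire - q_duct,        # volume balance, fire room
                    q_duct - q_filter]                 # volume balance, corridor

        p1, p2 = fsolve(balance, x0=[100.0, 50.0])
        print(f"room pressures: {p1:.1f} Pa and {p2:.1f} Pa")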

  15. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, there are three units in the block structure: the coding unit (CU), the prediction unit (PU), and the transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four equal-size blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares the rate-distortion (RD) costs and determines the depth using the compared information. Moreover, in order to speed up the encoding, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.

  16. Modification V to the computer code, STRETCH, for predicting coated-particle behavior

    International Nuclear Information System (INIS)

    Valentine, K.H.

    1975-04-01

    Several modifications have been made to the stress analysis code, STRETCH, in an attempt to improve agreement between the calculated and observed behavior of pyrocarbon-coated fuel particles during irradiation in a reactor environment. Specific areas of the code that have been modified are the neutron-induced densification model and the neutron-induced creep calculation. Also, the capability for modeling surface temperature variations has been added. HFIR Target experiments HT-12 through HT-15 have been simulated with the modified code, and the neutron-fluence vs particle-failure predictions compare favorably with the experimental results. Listings of the modified FORTRAN IV main source program and additional FORTRAN IV functions are provided along with instructions for supplying the additional input data. (U.S.)

  17. A Predictive Coding Account of Psychotic Symptoms in Autism Spectrum Disorder

    Science.gov (United States)

    van Schalkwyk, Gerrit I.; Volkmar, Fred R.; Corlett, Philip R.

    2017-01-01

    The co-occurrence of psychotic and autism spectrum disorder (ASD) symptoms represents an important clinical challenge. Here we consider this problem in the context of a computational psychiatry approach that has been applied to both conditions--predictive coding. Some symptoms of schizophrenia have been explained in terms of a failure of top-down…

  18. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

    The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can generate bad smells in the software that deteriorate its quality and reliability. The source code of open-source software is easily accessible by any developer, and thus frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from subcomponents of the Apache Abdera project, along with different measures of entropy (Shannon, Rényi and Tsallis entropy). By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of the model performance statistics (R², adjusted R², Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
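
    The three entropy measures named above are straightforward to compute once the distribution of source-code changes across subcomponents is known, as in the Python sketch below. The change counts are hypothetical, and the regression step that maps entropy to predicted bad smells is not included.

        import numpy as np

        changes = np.array([12, 7, 3, 25, 9], dtype=float)   # changes per subcomponent
        p = changes / changes.sum()

        def shannon(p):
            return float(-np.sum(p * np.log2(p)))

        def renyi(p, alpha=2.0):
            return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

        def tsallis(p, q=2.0):
            return float((1.0 - np.sum(p ** q)) / (q - 1.0))

        print(f"Shannon: {shannon(p):.3f} bits, Renyi(2): {renyi(p):.3f} bits, "
              f"Tsallis(2): {tsallis(p):.3f}")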

  19. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

    Three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, by making use of the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the current GOP. The GOP-level MVP scheme is thereby chosen as either the S-MVP or the AMDST-MVP, where the AMDST-MVP is the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the MV of the current block resulting from S-MVP is employed to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that AMDST-MVP not only improves the coding efficiency but also reduces the computational complexity.

  20. Comparison of LIFE-4 and TEMECH code predictions with TREAT transient test data

    International Nuclear Information System (INIS)

    Gneiting, B.C.; Bard, F.E.; Hunter, C.W.

    1984-09-01

    Transient tests in the TREAT reactor were performed on FFTF Reference design mixed-oxide fuel pins, most of which had received prior steady-state irradiation in the EBR-II reactor. These transient test results provide a data base for calibration and verification of fuel performance codes and for evaluation of processes that affect pin damage during transient events. This paper presents a comparison of the LIFE-4 and TEMECH fuel pin thermal/mechanical analysis codes with the results from 20 HEDL TREAT experiments, ten of which resulted in pin failure. Both the LIFE-4 and TEMECH codes provided an adequate representation of the thermal and mechanical data from the TREAT experiments. Also, a criterion for 50% probability of pin failure was developed for each code using an average cumulative damage fraction value calculated for the pins that failed. Both codes employ the two major cladding loading mechanisms of differential thermal expansion and central cavity pressurization, which were demonstrated by the test results. However, a detailed evaluation of the code predictions shows that the two code systems weigh the loading mechanisms differently to reach the same end points of the TREAT transient results.

  1. La corona radiata de Helios-Sol como símbolo de poder en la Cultura Visual Romana

    Directory of Open Access Journals (Sweden)

    Jorge Tomás García

    2017-12-01

    This article aims to analyze the presence of the iconographic motif of the corona radiata in Roman visual culture as a symbol of power. For this, we will analyze the mythological figure of Helios, and its multiple variants in the classical sources, especially those that relate it most closely to the divinity of the Sun in the Roman world. The main categories of interpretation of the Sun's iconography in Roman visual culture enrich the iconological variants of the presence of the corona radiata. Thus, we intend to analyze the real or symbolic nature of this iconographic attribute, so present since the time of Augustus as a symbol of power and light linked to the imperial realia.

  2. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can improve significantly the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  3. Thought insertion as a self-disturbance: An integration of predictive coding and phenomenological approaches

    Directory of Open Access Journals (Sweden)

    Philipp Sterzer

    2016-10-01

    Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from nowhere, as is already indicated by the early 20th century phenomenological accounts by the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (from German: Ichstörungen) of schizophrenia and used mescaline as a model-psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a becoming sensory of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error-related aberrant salience of external events that has been proposed previously, internal events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued with aberrant salience.

  4. Modelling of the Gadolinium Fuel Test IFA-681 using the BISON Code

    Energy Technology Data Exchange (ETDEWEB)

    Pastore, Giovanni [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Williamson, Richard L [Idaho National Laboratory

    2016-05-01

    In this work, the application of Idaho National Laboratory's fuel performance code BISON to the modelling of fuel rods from the Halden IFA-681 gadolinium fuel test is presented. First, an overview is given of the BISON models, focusing on UO2/UO2-Gd2O3 fuel and Zircaloy cladding. Then, BISON analyses of selected fuel rods from the IFA-681 test are performed. For the first time in a BISON application to integral fuel rod simulations, the analysis is informed by detailed neutronics calculations in order to accurately capture the radial power profile throughout the fuel, which is strongly affected by the complex evolution of the absorber Gd isotopes. In particular, radial power profiles calculated at the IFE–Halden Reactor Project with the HELIOS code are used. The work has been carried out in the framework of the collaboration between Idaho National Laboratory and the Halden Reactor Project. Some slides have been added as an appendix to present the newly developed PolyPole-1 algorithm for the modelling of intra-granular fission gas release.

  5. Coding Scheme for Assessment of Students’ Explanations and Predictions

    Directory of Open Access Journals (Sweden)

    Mihael Gojkošek

    2017-04-01

    In the process of analyzing students' explanations and predictions for the interaction between a brightness enhancement film and a beam of white light, a need for an objective and reliable assessment instrument arose. Consequently, we developed a coding scheme that was mostly inspired by the rubrics for self-assessment of scientific abilities. In the paper we present the grading categories that were integrated into the coding scheme, and descriptions of the criteria used for the evaluation of students' work. We report the results of the reliability analysis of the new assessment tool and present some examples of its application.

  6. A 3D transport-based core analysis code for research reactors with unstructured geometry

    International Nuclear Information System (INIS)

    Zhang, Tengfei; Wu, Hongchun; Zheng, Youqi; Cao, Liangzhi; Li, Yunzhao

    2013-01-01

    Highlights: • A core analysis code package based on 3D neutron transport calculation in complex geometry is developed. • Fine considerations of flux mapping, control rod effects and isotope depletion are modeled. • The code is shown to be highly accurate and capable of handling flexible operational cases for research reactors. - Abstract: As an effort to enhance the accuracy in simulating the operations of research reactors, a 3D transport core analysis code system named REFT was developed. HELIOS is employed due to its flexibility in describing complex geometry. A 3D triangular nodal SN transport solver, DNTR, endows the package with the capability of modeling cores with unstructured-geometry assemblies. A series of dedicated methods were introduced to meet the requirements of research reactor simulations. Afterwards, to make it more user-friendly, a graphical user interface was also developed for REFT. In order to validate the developed code system, the calculated results were compared with experimental results. The numerical and experimental results are in close agreement with each other, with the relative errors of k-eff being less than 0.5%. Results of depletion calculations were also verified by comparing them with the experimental data, and acceptable consistency was observed.

  7. Helios: a Multi-Purpose LIDAR Simulation Framework for Research, Planning and Training of Laser Scanning Operations with Airborne, Ground-Based Mobile and Stationary Platforms

    Science.gov (United States)

    Bechtold, S.; Höfle, B.

    2016-06-01

    In many technical domains of modern society, there is a growing demand for fast, precise and automatic acquisition of digital 3D models of a wide variety of physical objects and environments. Laser scanning is a popular and widely used technology to cover this demand, but it is also expensive and complex to use to its full potential. However, there might exist scenarios where the operation of a real laser scanner could be replaced by a computer simulation, in order to save time and costs. This includes scenarios like teaching and training of laser scanning, development of new scanner hardware and scanning methods, or generation of artificial scan data sets to support the development of point cloud processing and analysis algorithms. To test the feasibility of this idea, we have developed a highly flexible laser scanning simulation framework named Heidelberg LiDAR Operations Simulator (HELIOS). HELIOS is implemented as a Java library and split up into a core component and multiple extension modules. Extensible Markup Language (XML) is used to define scanner, platform and scene models and to configure the behaviour of modules. Modules were developed and implemented for (1) loading of simulation assets and configuration (i.e. 3D scene models, scanner definitions, survey descriptions etc.), (2) playback of XML survey descriptions, (3) TLS survey planning (i.e. automatic computation of recommended scanning positions) and (4) interactive real-time 3D visualization of simulated surveys. As a proof of concept, we show the results of two experiments: First, a survey planning test in a scene that was specifically created to evaluate the quality of the survey planning algorithm. Second, a simulated TLS scan of a crop field in a precision farming scenario. The results show that HELIOS fulfills its design goals.

  8. Improved predictions of nuclear reaction rates for astrophysics applications with the TALYS reaction code

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J.

    2008-01-01

    Nuclear reaction rates for astrophysics applications are traditionally determined on the basis of Hauser-Feshbach reaction codes, like MOST. These codes use simplified schemes to calculate the capture reaction cross section on a given target nucleus, not only in its ground state but also in the different thermally populated states of the stellar plasma at a given temperature. Such schemes include a number of approximations that have never been tested, such as an approximate width fluctuation correction, the neglect of delayed particle emission during the electromagnetic decay cascade or the absence of the pre-equilibrium contribution at increasing incident energies. New developments have been brought to the reaction code TALYS to estimate the Maxwellian-averaged reaction rates of astrophysics relevance. These new developments give us the possibility to calculate the reaction cross sections and the corresponding astrophysics rates with improved accuracy. The TALYS predictions for the thermonuclear rates of astrophysics relevance are presented and compared with those obtained with the MOST code on the basis of the same nuclear ingredients for nuclear structure properties, optical model potential, nuclear level densities and γ-ray strength. It is shown that, in particular, the pre-equilibrium process significantly influences the astrophysics rates of exotic neutron-rich nuclei. The reciprocity theorem traditionally used in astrophysics to determine photo-rates is also shown not to be valid for exotic nuclei. The predictions obtained with different nuclear inputs are also analyzed to provide an estimate of the theoretical uncertainties still affecting the reaction rate predictions far away from the experimentally known regions. (authors)

  9. Measurement of the inertial properties of the Helios F-1 spacecraft

    Science.gov (United States)

    Gayman, W. H.

    1975-01-01

    A gravity pendulum method of measuring lateral moments of inertia of large structures with an error of less than 1% is outlined. The method is based on the fact that, for a physical pendulum with a knife-edge support, the distance from the axis of rotation to the system center of gravity that minimizes the period of oscillation is equal to the system's centroidal radius of gyration. The method is applied to the results of a test procedure in which the Helios F-1 spacecraft was placed in a roll fixture with crossed flexure pivots as elastic constraints, and system oscillation measurements were made with each of a set of added moment-of-inertia increments. Equations of motion are derived with allowance for the effect of the finite pivot radius, and an error analysis is carried out to find the criterion for maximum accuracy in determining the square of the centroidal radius of gyration. The test procedure allows all measurements to be made with the specimen in the upright position.
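
    The relation behind the method is compact enough to state: a physical pendulum swinging about a knife edge at distance d from the centre of gravity has period T(d) = 2*pi*sqrt((k^2 + d^2)/(g*d)), which is minimal exactly at d = k, the centroidal radius of gyration. The Python sketch below simply locates that minimum on a grid; the numbers are illustrative, not Helios F-1 properties.

        import numpy as np

        g = 9.81
        k_true = 0.85                       # centroidal radius of gyration [m], assumed

        def period(d, k=k_true):
            return 2.0 * np.pi * np.sqrt((k**2 + d**2) / (g * d))

        d = np.linspace(0.2, 2.0, 1801)
        d_min = d[np.argmin(period(d))]     # pivot distance giving the minimal period
        print(f"period is minimal at d = {d_min:.3f} m (k = {k_true} m)")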

  10. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes.

    Science.gov (United States)

    Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan

    2017-10-03

    Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases, and applies the newly vectorized data in a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors; the first is composed of 45 elements, characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first one in differentiating between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.

  11. Status of development of a code for predicting the migration of ground additions - MOGRA

    International Nuclear Information System (INIS)

    Amano, Hikaru; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2003-01-01

    MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions including radioactive materials in a terrestrial environment. MOGRA consists of computational codes that are applicable to various evaluation target systems, and can be used on personal computers. The computational code has the dynamic compartment analysis block at its core, the graphical user interface (GUI) for computation parameter settings and results displays, data bases and so on. The compartments are obtained by classifying various natural environments into groups that exhibit similar properties. These codes are able to create or delete compartments and set the migration of environmental-load substances between compartments by a simple mouse operation. The system features universality and excellent expandability in the application of computations to various nuclides. (author)
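
    As an illustration of what a dynamic compartment analysis block solves, the sketch below sets up a small first-order compartment system and integrates it in Python; the compartment names, transfer rates and decay constant are hypothetical placeholders, not MOGRA data or MOGRA code.

      # Minimal dynamic compartment model: first-order transfers plus decay.
      # All names and rate constants below are illustrative, not MOGRA values.
      import numpy as np
      from scipy.integrate import solve_ivp

      compartments = ["atmosphere", "soil", "plant", "river"]
      # k[i, j]: transfer rate (1/day) from compartment i to compartment j
      k = np.array([
          [0.0, 0.05, 0.01, 0.0],    # atmosphere -> soil, plant
          [0.0, 0.0,  0.02, 0.005],  # soil -> plant, river
          [0.0, 0.01, 0.0,  0.0],    # plant -> soil (litter fall)
          [0.0, 0.0,  0.0,  0.0],    # river treated as a sink here
      ])
      lam = 0.001  # radioactive decay constant (1/day), hypothetical nuclide

      def rhs(t, y):
          outflow = k.sum(axis=1) * y   # losses from each compartment
          inflow = k.T @ y              # gains from all other compartments
          return inflow - outflow - lam * y

      y0 = [1.0, 0.0, 0.0, 0.0]         # unit deposition to the atmosphere
      sol = solve_ivp(rhs, (0.0, 365.0), y0)
      print(dict(zip(compartments, sol.y[:, -1])))  # inventories after one year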

  12. Experiment predictions of LOFT reflood behavior using the RELAP4/MOD6 code

    International Nuclear Information System (INIS)

    Lin, J.C.; Kee, E.J.; Grush, W.H.; White, J.R.

    1978-01-01

    The RELAP4/MOD6 computer code was used to predict the thermal-hydraulic transient for Loss-of-Fluid Test (LOFT) Loss-of-Coolant Accident (LOCA) experiments L2-2, L2-3, and L2-4. This analysis will aid in the development and assessment of analytical models used to analyze the LOCA performance of commercial power reactors. Prior to performing experiments in the LOFT facility, the experiments are modeled in counterpart tests performed in the nonnuclear Semiscale MOD 1 facility. A comparison of the analytical results with Semiscale data will verify the analytical capability of the RELAP4 code to predict the thermal-hydraulic behavior of the Semiscale LOFT counterpart tests. The analytical model and the results of analyses for the reflood portion of the LOFT LOCA experiments are described. These results are compared with the data from Semiscale

  13. A code MOGRA for predicting and assessing the migration of ground additions

    International Nuclear Information System (INIS)

    Amano, Hikaru; Atarashi-Andoh, Mariko; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2004-01-01

    The environment should be protected from the toxic effects of not only ionizing radiation but also any other environmental load materials. A Code MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions including radioactive materials in a terrestrial environment, which consists of computational codes that are applicable to various evaluation target systems, and can be used on personal computers for not only the purpose of the migration analysis but also the environmental assessment to livings of the environmental load materials. The functionality of MOGRA has been verified by applying it in the analyses of the migration rates of radioactive substances from the atmosphere to soils and plants and flow rates into the rivers. Migration of radionuclides in combinations of hypothetical various land utilization areas was also verified. The system can analyze the dynamic changes of target radionuclide's concentrations in each compartment, fluxes from one compartment to another compartment. The code MOGRA has varieties of databases, which is included in an additional code MOGRA-DB. This additional code MOGRA-DB consists of radionuclides decay chart, distribution coefficients between solid and liquid, transfer factors from soil to plant, transfer coefficients from feed to beef and milk, concentration factors, and age dependent dose conversion factors for many radionuclides. Another additional code MOGRA-MAP can take in graphic map such as JPEG, TIFF, BITMAP, and GIF files, and calculate the square measure of the target land. (author)

  14. Experimental studies and computational benchmark on heavy liquid metal natural circulation in a full height-scale test loop for small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Yong-Hoon, E-mail: chaotics@snu.ac.kr [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Cho, Jaehyun [Korea Atomic Energy Research Institute, 111 Daedeok-daero, 989 Beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Lee, Jueun; Ju, Heejae; Sohn, Sungjune; Kim, Yeji; Noh, Hyunyub; Hwang, Il Soon [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of)

    2017-05-15

    Highlights: • Experimental studies on natural circulation for lead-bismuth eutectic were conducted. • Adiabatic wall boundary conditions were established by compensating heat loss. • Computational benchmark with a system thermal-hydraulics code was performed. • Numerical simulation and experiment showed good agreement in mass flow rate. • An empirical relation was formulated for mass flow rate with experimental data. - Abstract: In order to test the enhanced safety of small lead-cooled fast reactors, lead-bismuth eutectic (LBE) natural circulation characteristics have been studied. We present results of experiments with LBE non-isothermal natural circulation in a full-height scale test loop, HELIOS (heavy eutectic liquid metal loop for integral test of operability and safety of PEACER), and the validation of a system thermal-hydraulics code. The experimental studies on LBE were conducted under steady-state conditions as a function of core power from 9.8 kW to 33.6 kW. Local surface heaters on the main loop were activated and finely tuned by a trial-and-error approach to establish adiabatic wall boundary conditions. The thermal-hydraulic system code MARS-LBE was validated using the well-defined benchmark data. It was found that the predictions were mostly in good agreement with the experimental data in terms of mass flow rate and temperature difference, both of which were within 7%. From the experimental results, an empirical relation predicting the mass flow rate under non-isothermal, adiabatic conditions in HELIOS was derived.
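
    The empirical relation itself is not reproduced in the abstract; for orientation, the textbook steady-state balance for a single-phase natural circulation loop (equating the buoyancy head to the friction losses written as \Delta p = R \dot{m}^{2} / \rho) gives the familiar one-third-power dependence of mass flow on power. This is a generic scaling argument, not the HELIOS-specific correlation derived in the paper:

      \dot{m} \;=\; \left( \frac{\rho^{2} \beta g \, \Delta Z \, Q}{c_{p} \, R} \right)^{1/3}

    where Q is the core power, \Delta Z the elevation difference between the thermal centres of the heater and the cooler, \beta the thermal expansion coefficient, c_p the specific heat, and R a lumped hydraulic resistance of the loop; the loop temperature rise follows from \Delta T = Q / (\dot{m} c_{p}).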

  15. Assessment of the GOTHIC code for prediction of hydrogen flame propagation in small scale experiments

    International Nuclear Information System (INIS)

    Lee, Jin-Yong (E-mail: jinyong1@fnctech.com); Lee, Jung-Jae; Park, Goon-Cherl (E-mail: parkgc@snu.ac.kr)

    2006-01-01

    With rising concerns regarding the time- and space-dependent hydrogen behavior in severe accidents, calculations of local hydrogen combustion in compartments have been attempted using CFD codes like GOTHIC. In particular, space-resolved hydrogen combustion analysis is essential to address certain safety issues, such as the survivability of safety components, and to determine proper positions for hydrogen control devices such as recombiners or igniters. The GOTHIC 6.1b code contains many advanced features associated with the hydrogen burn models to enhance its calculation capability. In this study, we performed premixed hydrogen/air combustion experiments with an upright, rectangular combustion chamber of dimensions 1 m x 0.024 m x 1 m. The GOTHIC 6.1b code was used to simulate the hydrogen/air combustion experiments, and its prediction capability was assessed by comparing the experimental results with multidimensional calculations. In particular, the prediction capability of the GOTHIC 6.1b code for local hydrogen flame propagation phenomena was examined. For some cases, comparisons are also presented for lumped-parameter modeling of hydrogen combustion. By evaluating the effect of parametric simulations, we provide some guidance for local hydrogen combustion analysis using the GOTHIC 6.1b code. From the analysis results, it is concluded that the modeling parameters of the GOTHIC 6.1b code should be modified when applying the mechanistic burn model for hydrogen propagation analysis in small geometries.

  16. Analysis methodology for RBMK-1500 core safety and investigations on corium coolability during a LWR severe accident

    Energy Technology Data Exchange (ETDEWEB)

    Jasiulevicius, Audrius

    2003-07-01

    This thesis presents the work involving two broad aspects within the field of nuclear reactor analysis and safety. These are: - development of a fully independent reactor dynamics and safety analysis methodology of the RBMK-1500 core transient accidents and - experiments on the enhancement of coolability of a particulate bed or a melt pool due to heat removal through the control rod guide tubes. The first part of the thesis focuses on the development of the RBMK-1500 analysis methodology based on the CORETRAN code package. The second part investigates the issue of coolability during severe accidents in LWR type reactors: the coolability of debris bed and melt pool for in-vessel and ex-vessel conditions. The first chapter briefly presents the status of developments in both the RBMK-1500 core analysis and the corium coolability areas. The second chapter describes the generation of the RBMK-1500 neutron cross section data library with the HELIOS code. The cross section library was developed for the whole range of the reactor conditions. The results of the benchmarking with the WIMS-D4 code and validation against the RBMK Critical Facility experiments are also presented here. The HELIOS generated neutron cross section data library provides close agreement with the WIMS-D4 code results. The validation against the data from the Critical Experiments shows that the HELIOS generated neutron cross section library provides excellent predictions for the criticality, axial and radial power distribution, control rod reactivity worths and coolant reactivity effects, etc. The reactivity effects of voiding for the system, fuel assembly and additional absorber channel are underpredicted in the calculations using the HELIOS code generated neutron cross sections. The underprediction, however, is much less than that obtained when the WIMS-D4 code generated cross sections are employed. The third chapter describes the work, performed towards the accurate prediction, assessment and

  17. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  18. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multi-channel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.; Lahey, R.T. Jr.; McFarlane, A.F.; Podowski, M.Z.

    1988-01-01

    The NUFREQ-NP code was modified and set up at Westinghouse, USA, for mixed fuel type multi-channel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF, for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All thirteen Peach Bottom-2 EOC-2/3 low flow stability tests were simulated. A key aspect for code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. Moreover, predictions were generally on the conservative side. The results of detailed direct comparisons with experimental data using the NUFREQ-NPW code have demonstrated that BWR core stability margins are conservatively predicted, and all data trends are captured with good accuracy. The methodology is thus suitable for BWR design and licensing purposes. 11 refs., 12 figs., 2 tabs

  19. Effects of particle size, helium gas pressure and microparticle dose on the plasma concentration of indomethacin after bombardment of indomethacin-loaded poly-L-lactic acid microspheres using a Helios gun system.

    Science.gov (United States)

    Uchida, Masaki; Natsume, Hideshi; Kobayashi, Daisuke; Sugibayashi, Kenji; Morimoto, Yasunori

    2002-05-01

    We investigated the effects of the particle size of indomethacin-loaded poly-L-lactic acid microspheres (IDM-loaded PLA MS), the helium pressure used to accelerate the particles, and the bombardment dose of PLA MS on the plasma concentration of IDM after bombarding the abdomen of hairless rats with IDM-loaded PLA MS of different particle size ranges (20-38, 44-53 and 75-100 microm) using the Helios gene gun system (Helios gun system). Using larger particles and a higher helium pressure produced an increase in the plasma IDM concentration, and the area under the plasma concentration-time curve (AUC) and the resultant F (relative bioavailability with respect to intracutaneous injection) of IDM increased by an amount depending on the particle size and helium pressure. Although a reduction in the bombardment dose led to a decrease in C(max) and AUC, F increased on decreasing the bombardment dose. In addition, a more efficient F was obtained after bombarding different sites of the abdomen with IDM-loaded PLA MS of 75-100 microm in diameter at each low dose, compared with that after bolus bombardment with a high dose (dose equivalent). These results suggest that the bombardment injection of drug-loaded microspheres by the Helios gun system is a very useful tool for delivering a variety of drugs in powder form into the skin and systemic circulation.

  20. Predictive coding accelerates word recognition and learning in the early stages of language development.

    Science.gov (United States)

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  1. Study of a fuel assembly for the nuclear reactor of IV generation cooled with supercritical water

    International Nuclear Information System (INIS)

    Barragan M, A.; Martin del Campo M, C.; Francois L, J. L.; Espinosa P, G.

    2011-11-01

    This work presents a neutronic study of a square fuel assembly design with a double row of fuel rods and a moderator box at the center of the arrangement, for a nuclear reactor cooled with supercritical water (SCWR). The SCWR was chosen because of the characteristics of its design, which is based mainly on light water reactors (PWR and BWR); the operational experience available for those reactors makes it possible to use similar models and programs to simulate the fuel and the core of this type of reactor. The neutronic codes MCNPX and Helios were used to develop the necessary models and to carry out the design and analysis of the SCWR. Both codes were used because MCNPX, which is widely used in the neutronic simulation of these reactors, served as the reference code for checking the results obtained with the Helios code, whose calculations are more efficient because its calculation times are shorter. The same core design parameters were considered for both codes. The results show that the Helios design is a viable option for simulating these reactors, since its values of the neutron multiplication factor are very similar to those obtained with MCNPX. On the other hand, it could be corroborated that the CASMO-4 code is inadequate for simulating the fuel at the temperature and water pressure conditions of the SCWR. (Author)

  2. Study of a fuel assembly for the nuclear reactor of IV generation cooled with supercritical water; Estudio de un ensamble de combustible para el reactor nuclear de generacion IV enfriado con agua supercritica

    Energy Technology Data Exchange (ETDEWEB)

    Barragan M, A.; Martin del Campo M, C.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Mexico D. F. (Mexico); Espinosa P, G., E-mail: albrm29@yahoo.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, Area de Ingenieria en Recursos Energeticos, Av. San Rafael Atlixco 186, Col. Vicentina, 09340 Mexico D. F. (MX)

    2011-11-15

    This work presents a neutronic study of a square fuel assembly design with a double row of fuel rods and a moderator box at the center of the arrangement, for a nuclear reactor cooled with supercritical water (SCWR). The SCWR was chosen because of the characteristics of its design, which is based mainly on light water reactors (PWR and BWR); the operational experience available for those reactors makes it possible to use similar models and programs to simulate the fuel and the core of this type of reactor. The neutronic codes MCNPX and Helios were used to develop the necessary models and to carry out the design and analysis of the SCWR. Both codes were used because MCNPX, which is widely used in the neutronic simulation of these reactors, served as the reference code for checking the results obtained with the Helios code, whose calculations are more efficient because its calculation times are shorter. The same core design parameters were considered for both codes. The results show that the Helios design is a viable option for simulating these reactors, since its values of the neutron multiplication factor are very similar to those obtained with MCNPX. On the other hand, it could be corroborated that the CASMO-4 code is inadequate for simulating the fuel at the temperature and water pressure conditions of the SCWR. (Author)

  3. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate

  4. Assessment of predictive capability of REFLA/TRAC code for large break LOCA transient in PWR using LOFT L2-5 test data

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1994-03-01

    The REFLA/TRAC code is a best estimate code developed at the Japan Atomic Energy Research Institute (JAERI) to provide advanced predictions of thermal-hydraulic transients in light water reactors (LWRs). The REFLA/TRAC code uses the TRAC-PF1/MOD1 code as the framework of the code. The REFLA/TRAC code is expected to be used for the calibration of licensing codes, accident analysis, accident simulation of LWRs, and design of advanced LWRs. Several models have been implemented in the TRAC-PF1/MOD1 code at JAERI, including a reflood model, a condensation model, interfacial and wall friction models, etc. These models have been verified using data from various separate effect tests. This report describes an assessment of the REFLA/TRAC code, which was performed to evaluate its predictive capability for integral system behavior under a large break loss of coolant accident (LBLOCA), using data from the LOFT L2-5 test. The assessment calculation confirmed that the REFLA/TRAC code can predict the break mass flow rate, emergency core cooling water bypass and clad temperature in the LOFT L2-5 test with excellent accuracy. The CPU time of the REFLA/TRAC code was about 1/3 of that of the TRAC-PF1/MOD1 code. The REFLA/TRAC code can perform stable and fast simulation of thermal-hydraulic behavior in a PWR LBLOCA with sufficient accuracy for practical use. (author)

  5. Advanced methodology to simulate boiling water reactor transient using coupled thermal-hydraulic/neutron-kinetic codes

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Christoph Oliver

    2016-06-13

    Coupled Thermal-hydraulic/Neutron-kinetic (TH/NK) simulations of Boiling Water Reactor transients require well validated and accurate simulation tools. The generation of cross-section (XS) libraries, depending on the individual thermal-hydraulic state parameters, is of paramount importance for coupled simulations. Problem-dependent XS-sets for 3D core simulations are being generated mainly by well validated, fast running, commercial and user-friendly lattice codes such as CASMO and HELIOS. In this dissertation a computational route, based on the lattice code SCALE6/TRITON, the cross-section interface GenPMAXS, the best-estimate thermal-hydraulic system code TRACE and the core simulator PARCS, for best-estimate simulations of Boiling Water Reactor (BWR) transients has been developed and validated. The computational route has been supplemented by a subsequent uncertainty and sensitivity study based on Monte Carlo sampling and propagation of the uncertainties of input parameters to the output (SUSA code). The analysis of a single BWR fuel assembly depletion problem with PARCS using SCALE/TRITON cross-sections has shown good agreement with the results obtained with CASMO cross-section sets. However, to compensate for the deficiencies of the interface program GenPMAXS, PYTHON scripts had to be developed to incorporate missing data, such as the yields of iodine, xenon and promethium, into the cross-section data sets (PMAXS format) generated by GenPMAXS from the SCALE/TRITON output. The results of the depletion analysis of a full BWR core with PARCS have indicated the importance of considering history effects and of adequate modeling of the reflector region and the control rods, as the PARCS simulations for depleted fuel and all control rods inserted (ARI) differ significantly at the fuel assembly top and bottom. Systematic investigations with the coupled codes TRACE/PARCS have been performed to analyse the core behaviour at different thermal conditions using nuclear data (XS
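
    The sampling-based uncertainty propagation mentioned above can be sketched as follows; the parameter names, distributions, response function and the Wilks-type sample size are illustrative assumptions only, since in the dissertation each model evaluation corresponds to a full coupled calculation, with the sampling handled by the SUSA code.

      # Illustrative GRS/SUSA-style Monte Carlo propagation of input uncertainties.
      # Parameters, ranges and the placeholder model are hypothetical.
      import numpy as np

      rng = np.random.default_rng(42)
      n_runs = 93  # first-order two-sided 95%/95% tolerance limits (Wilks) need >= 93 runs

      fuel_conductivity_factor = rng.normal(1.0, 0.05, n_runs)
      gap_conductance_factor   = rng.uniform(0.8, 1.2, n_runs)
      boron_worth_factor       = rng.normal(1.0, 0.03, n_runs)

      def model(fc, gc, bw):
          """Placeholder for one coupled code run; returns a scalar output of interest."""
          return fc**0.4 * gc**0.2 / bw  # hypothetical response, not a real figure of merit

      outputs = np.array([model(fc, gc, bw) for fc, gc, bw in
                          zip(fuel_conductivity_factor,
                              gap_conductance_factor,
                              boron_worth_factor)])

      # Non-parametric tolerance interval from the extreme order statistics
      print("95%/95% two-sided bounds:", outputs.min(), outputs.max())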

  6. Local power peaking factor estimation in nuclear fuel by artificial neural networks

    International Nuclear Information System (INIS)

    Montes, Jose Luis; Francois, Juan Luis; Ortiz, Juan Jose; Martin-del-Campo, Cecilia; Perusquia, Raul

    2009-01-01

    This paper presents the training of an artificial neural network (ANN) to accurately predict, in a very short time, a physical parameter used in nuclear reactor fuel optimization: the local power peaking factor (LPPF) in a typical boiling water reactor (BWR) fuel lattice. The ANN training patterns are distributions of fissile and burnable poison materials in the fuel lattice and their associated LPPF. These data were obtained by modeling the fuel lattices with a neutronic simulator, the HELIOS transport code. The combination of the pin U-235 enrichment and the Gd2O3 (gadolinia) concentration, inside the 10 x 10 fuel lattice array, was encoded by three different methods. However, the only encoding method that was able to give a good prediction of the LPPF was the one which added the U-235 enrichment and the gadolinia concentration. The results show that the relative error in the estimation of the LPPF, obtained by the trained ANN, ranged from 0.022% to 0.045% with respect to the HELIOS results.
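
    A minimal sketch of the winning encoding (summing the pin U-235 enrichment and the gadolinia concentration for each of the 100 lattice positions) feeding a small regression network is given below; the training data, network size and library choice are illustrative assumptions, not the paper's HELIOS-generated dataset or its ANN architecture.

      # Sketch of the "added" lattice encoding and an ANN regressor for the LPPF.
      # Targets are random placeholders standing in for HELIOS results.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_lattices = 500

      enrichment = rng.uniform(1.6, 4.9, size=(n_lattices, 10, 10))        # wt% U-235 per pin
      gadolinia  = rng.choice([0.0, 3.0, 5.0], size=(n_lattices, 10, 10))  # wt% Gd2O3 per pin

      X = (enrichment + gadolinia).reshape(n_lattices, 100)  # the "added" encoding
      y = 1.0 + 0.1 * rng.random(n_lattices)                 # placeholder LPPF targets

      ann = MLPRegressor(hidden_layer_sizes=(50, 20), max_iter=2000, random_state=0)
      ann.fit(X, y)
      print("LPPF estimate for the first lattice:", ann.predict(X[:1])[0])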

  7. Modified linear predictive coding approach for moving target tracking by Doppler radar

    Science.gov (United States)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, supporting a wide range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments is conducted to illustrate the validity and performance of the proposed techniques.
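
    A minimal sketch of the underlying LPC extension step (coefficients from the Yule-Walker equations, then forward extrapolation of the echo) is given below; the adaptive noise filter and the error-array correction that constitute the "modified" part of the method are not reproduced here, and the signal is synthetic.

      # Plain LPC extension of a sampled echo: fit AR coefficients via the
      # Yule-Walker (autocorrelation) equations, then extrapolate forward.
      import numpy as np

      def lpc_coefficients(x, order):
          """Solve the Yule-Walker equations for the AR coefficients."""
          r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          return np.linalg.solve(R, r[1:order + 1])

      def lpc_extend(x, order, n_extra):
          """Append n_extra linearly predicted samples to the signal x."""
          a = lpc_coefficients(np.asarray(x, dtype=float), order)
          y = list(x)
          for _ in range(n_extra):
              y.append(float(np.dot(a, y[-1:-order - 1:-1])))  # newest sample first
          return np.array(y)

      t = np.arange(256) / 1000.0
      echo = (np.cos(2 * np.pi * 60.0 * t)
              + 0.05 * np.random.default_rng(1).standard_normal(256))
      extended = lpc_extend(echo, order=12, n_extra=128)  # extends the usable record
      print(extended.shape)  # (384,)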

  8. A study on the prediction capability of GOTHIC and HYCA3D code for local hydrogen concentrations

    International Nuclear Information System (INIS)

    Choi, Y. S.; Lee, W. J.; Lee, J. J.; Park, K. C.

    2002-01-01

    In this study, the capability of the GOTHIC and HYCA3D codes to predict local hydrogen concentrations was verified against experimental results. Among the experiments performed by SNU and other organizations inside and outside the country, the fast-transient and obstacle cases were selected. In the case of a large subcompartment, both codes show good agreement with the experimental data. However, in the case of small and complex geometries or fast transients, the GOTHIC results differ considerably from the experimental ones. This indicates that the GOTHIC code is unsuitable for these cases. In contrast, the HYCA3D code agrees well with all the experimental data.

  9. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding.

    Science.gov (United States)

    Hogendoorn, Hinze; Burkitt, Anthony N

    2018-05-01

    Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Prediction of surface cracks from thick-walled pressurized vessels with ASME code

    International Nuclear Information System (INIS)

    Thieme, W.

    1983-01-01

    The ASME Code, Section XI, Appendix A, 'Analysis of flaw indications', is still non-mandatory for the pressure components of nuclear power plants. It is certainly difficult to take realistic account of the many factors influencing crack propagation when making life predictions. The accuracy of the US guideline is analysed, and its possible applications are roughly outlined. (orig./IHOE) [de]

  11. Magnetic fields and flows between 1 and 0.3 AU during the primary mission of Helios 1

    International Nuclear Information System (INIS)

    Burlaga, L.F.; Ness, N.F.; Mariani, F.; Bavassano, B.; Villante, U.; Rosenbauer, H.; Schwenn, R.; Harvey, J.

    1978-01-01

    Helios 1 moved from 1 AU on December 10, 1974, to 0.31 AU on March 15, 1975, and the sun rotated beneath the spacecraft nearly 4 times during the interval. Recurrent high-speed streams with uniform magnetic polarity were observed, and they were associated with coronal holes of the same polarity. Although they were recurrent, the streams and their magnetic field patterns were not stationary, because the coronal holes which produced them changed in shape and latitude from one rotation to the next. We estimated that the magnetic field intensity of open field lines in some of these holes was of the order of 10-20 G. Recurrent slow flows were also observed. The magnetic field polarity and intensity in these flows were irregular, and they changed from one rotation to the next. Cold magnetic enhancements (CME's) characterized by a twofold to threefold enhancement of magnetic field intensity and a fivefold to sevenfold depression of proton temperature relative to conditions ahead of the CME's were observed in some slow flows. Some of these CME's were contiguous with interaction regions of streams. At perihelion, Helios observed a recurrent stream which was associated with a lobe of the south polar coronal hole. The longitudinal width of the stream was three times that of the hole. We estimate that the width of the eastern and western boundaries of the streams at the coronal holes was only 2.5° ± 1.5°, and we infer that the width at the northern boundary of the stream was 0°. We conclude that between the sun and 0.3 AU there was a diverging stream surrounded by a thin boundary layer in which there was a large velocity shear. There is evidence for compression of the magnetic field in the western boundary layer (interaction region), presumably due to steepening of the stream within 0.31 AU

  12. Validation of the ORIGEN-S code for predicting radionuclide inventories in used CANDU fuel

    International Nuclear Information System (INIS)

    Tait, J.C.; Gauld, I.; Kerr, A.H.

    1995-01-01

    The safety assessment being conducted by AECL Research for the concept of deep geological disposal of used CANDU UO 2 fuel requires the calculation of radionuclide inventories in the fuel to provide source terms for radionuclide release. This report discusses the validation of selected actinide and fission-product inventories calculated using the ORIGEN-S code coupled with the WIMS-AECL lattice code, using data from analytical measurements of radioisotope inventories in Pickering CANDU reactor fuel. The recent processing of new ENDF/B-VI cross-section data has allowed the ORIGEN-S calculations to be performed using the most up-to-date nuclear data available. The results indicate that the code is reliably predicting actinide and the majority of fission-product inventories to within the analytical uncertainty. ((orig.))

  13. BOW. A computer code to predict lateral deflections of composite beams

    Energy Technology Data Exchange (ETDEWEB)

    Tayal, M.

    1987-08-15

    Arrays of tubes are used in many engineered structures, such as in nuclear fuel bundles and in steam generators. The tubes can bend (bow) due to in-service temperatures and loads. Assessments of bowing of nuclear fuel elements can help demonstrate the integrity of fuel and of surrounding components, as a function of operating conditions such as channel power. The BOW code calculates the bending of composite beams such as fuel elements, due to gradients of temperature and due to hydraulic forces. The deflections and rotations are calculated in both lateral directions, for given temperature conditions. Wet and dry operation of the sheath can be simulated. BOW accounts for the following physical phenomena: circumferential and axial variations in the temperatures of the sheath and of the pellet; cracking of pellets; grip and slip between the pellets and the sheath; hydraulic drag; restraints from endplates, from neighbouring elements, and from the pressure-tube; gravity; concentric or eccentric welds between endcap and endplate; neutron flux gradients; and variations of material properties with temperature. The code is based on fundamental principles of mechanics. The governing equations are solved numerically using the finite element method. Several comparisons with closed-form equations show that the solutions of BOW are accurate. BOW's predictions for initial in-reactor bow are also consistent with two post-irradiation measurements.
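
    The dominant thermal-gradient contribution can be illustrated with the standard beam-theory relation for a tube carrying a side-to-side (diametral) temperature difference; this is a generic closed-form estimate of the kind such codes are compared against, not BOW's finite element formulation:

      \kappa \;=\; \frac{\alpha \, \Delta T_{d}}{d_{o}}, \qquad \delta \;\approx\; \frac{\kappa L^{2}}{8}

    where \kappa is the curvature induced by the diametral temperature difference \Delta T_{d} across a tube of outside diameter d_{o}, \alpha is the thermal expansion coefficient, and \delta is the mid-span lateral deflection of an element of length L under uniform curvature; the mechanical effects listed above (hydraulic drag, end restraints, pellet-sheath interaction, gravity) are then superimposed by the finite element solution.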

  14. Multi codes and multi-scale analysis for void fraction prediction in hot channel for VVER-1000/V392

    International Nuclear Information System (INIS)

    Hoang Minh Giang; Hoang Tan Hung; Nguyen Huu Tiep

    2015-01-01

    Recently, multi-code and multi-scale analysis approaches have been widely applied to study core thermal-hydraulic behavior, such as void fraction prediction. Better results are achieved by using multiple or coupled codes such as PARCS and RELAP5. The advantage of multi-scale analysis is the ability to zoom in on the region of interest within the simulated domain for detailed investigation. Therefore, in this study, multi-code analysis with MCNP5, RELAP5 and CTF, as well as multi-scale analysis based on RELAP5 and CTF, is applied to investigate the void fraction in the hot channel of the VVER-1000/V392 reactor. Since the VVER-1000/V392 is a typical advanced reactor that can be considered the basis for the later VVER-1200, understanding core behavior under transient conditions is necessary in order to investigate VVER technology. It is shown that the near-wall boiling term Γw in RELAP5, based on the Lahey mechanistic method, may not give void fraction predictions as accurate as those of a smaller-scale code such as CTF. (author)

  15. Assessment of Prediction Capabilities of COCOSYS and CFX Code for Simplified Containment

    Directory of Open Access Journals (Sweden)

    Jia Zhu

    2016-01-01

    Acceptable accuracy in the simulation of severe accident scenarios in the containments of nuclear power plants is required to investigate the consequences of severe accidents and the effectiveness of potential countermeasures. For this purpose, the actual capability of the CFX tool and the COCOSYS code is assessed in prototypical geometries for a simplified physical process (a plume due to a heat source) under adiabatic and convection boundary conditions, respectively. Results of the comparison under the adiabatic boundary condition show good agreement among the analytical solution, the COCOSYS prediction, and the CFX prediction for the zone temperature. The general trend of the temperature distribution along the vertical direction predicted by COCOSYS agrees with the CFX prediction except in the dome; this phenomenon is predicted well by CFX but fails to be reproduced by COCOSYS. Both COCOSYS and CFX indicate that there is no temperature stratification inside the dome. The CFX prediction shows that a temperature stratification area occurs beneath the dome and away from the heat source. The temperature stratification area under the adiabatic boundary condition is larger than that under the convection boundary condition. The results indicate that the average temperature inside the containment predicted with the COCOSYS model is overestimated under the adiabatic boundary condition, while it is underestimated under the convection boundary condition, compared to the CFX prediction.

  16. Validation of the ORIGEN-S code for predicting radionuclide inventories in used CANDU Fuel

    International Nuclear Information System (INIS)

    Tait, J.C.; Gauld, I.; Kerr, A.H.

    1994-10-01

    The safety assessment being conducted by AECL Research for the concept of deep geological disposal of used CANDU UO 2 fuel requires the calculation of radionuclide inventories in the fuel to provide source terms for radionuclide release. This report discusses the validation of selected actinide and fission-product inventories calculated using the ORIGEN-S code coupled with the WIMS-AECL lattice code, using data from analytical measurements of radioisotope inventories in Pickering CANDU reactor fuel. The recent processing of new ENDF/B-VI cross-section data has allowed the ORIGEN-S calculations to be performed using the most up-to-date nuclear data available. The results indicate that the code is reliably predicting actinide and the majority of fission-product inventories to within the analytical uncertainty. 38 refs., 4 figs., 5 tabs

  17. Pin cell discontinuity factors in the transient 3-D discrete ordinates code TORT-TD - 237

    International Nuclear Information System (INIS)

    Seubert, A.

    2010-01-01

    This paper describes the application of generalized equivalence theory to the time-dependent 3-D discrete ordinates neutron transport code TORT-TD. The introduction of pin cell discontinuity factors into the discrete ordinates transport equation is described by assuming a linear dependence of the homogenized neutron angular flux within a pin cell, which may be discontinuous at the interfaces to adjacent cells. The homogenized flux discontinuity at cell interfaces is expressed by pin cell discontinuity factors, which in turn are determined from fuel assembly lattice calculations using HELIOS. Application of TORT-TD to the all-rods-in state of the PWR MOX/UO2 Core Transient Benchmark with pin cell homogenized nuclear cross sections demonstrates the potential of pin cell discontinuity factors to reduce pin cell homogenization errors. (authors)
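
    The defining relations of generalized equivalence theory used here are standard; in sketch form (notation chosen for illustration, not TORT-TD's):

      f_{s} \;=\; \frac{\bar{\phi}^{\,\mathrm{het}}_{s}}{\bar{\phi}^{\,\mathrm{hom}}_{s}}, \qquad f_{s}^{-} \, \bar{\phi}^{\,\mathrm{hom}}_{s,-} \;=\; f_{s}^{+} \, \bar{\phi}^{\,\mathrm{hom}}_{s,+}

    where \bar{\phi}^{het}_{s} is the surface-averaged flux of the heterogeneous lattice solution on pin cell surface s and \bar{\phi}^{hom}_{s} is the corresponding surface flux of the homogenized cell; the second relation replaces ordinary flux continuity at the interface between two adjacent homogenized cells (denoted - and +). In the paper the factors are obtained from HELIOS assembly lattice calculations, and the homogenized flux is additionally allowed a linear variation within each pin cell.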

  18. Transitioning from interpretive to predictive in thermal hydraulic codes

    International Nuclear Information System (INIS)

    Mousseau, V.A.

    2004-01-01

    The current thermal hydraulic codes in use in the US, RELAP and TRAC, were originally written in the mid to late 1970's. At that time computers were slow, expensive, and had small memories. Because of these constraints, sacrifices had to be made, both in physics and numerical methods, which resulted in limitations on the accuracy of the solutions. Significant changes have occurred that induce very different requirements for the thermal hydraulic codes to be used for the future GEN-IV nuclear reactors. First, computer speed and memory grow at an exponential rate while the costs hold constant or decrease. Second, passive safety systems in modern designs stretch the length of relevant transients to many days. Finally, costs of experiments have grown very rapidly. Because of these new constraints, modern thermal hydraulic codes will be relied on for a significantly larger portion of bringing a nuclear reactor on line. Simulation codes will have to define in which part of state space experiments will be run. They will then have to be able to extend the small number of experiments to cover the large state space in which the reactors will operate. This data extrapolation mode will be referred to as 'predictive'. One of the keys to analyzing the accuracy of a simulation is to consider the entire domain being simulated. For example, in a reactor design where the containment is coupled to the reactor cooling system through radiative heat transfer, the accuracy of a transient includes the containment, the radiation heat transfer, the fluid flow in the cooling system, the thermal conduction in the solid, and the neutron transport in the reactor. All of this physics is coupled together in one nonlinear system through material properties, cross sections, heat transfer coefficients, and other mechanisms that exchange mass, momentum, and energy. Traditionally, these different physical domains (containment, cooling system, nuclear fuel, etc.) have been solved in different

  19. Prediction of the HBS width and Xe concentration in grain matrix by the INFRA code

    International Nuclear Information System (INIS)

    Yang, Yong Sik; Lee, Chan Bok; Kim, Dae Ho; Kim, Young Min

    2004-01-01

    Formation of the HBS (high burnup structure) is an important phenomenon for high burnup fuel performance and safety. For the prediction of the HBS (the so-called 'rim microstructure'), a proposed rim microstructure formation model, which is a function of the fuel temperature, grain size and fission rate, was inserted into the high burnup fuel performance code INFRA. During the past decades, various examinations have been performed to find the HBS formation mechanism and define HBS characteristics. In the HBEP (High Burnup Effects Program), several rods were examined by EPMA analysis to measure the HBS width, and these results were re-measured using improved techniques, including XRF and detailed microstructure examination. Recently, very high burnup (∼100 MWd/kgU) fuel examination results were reported by Manzel et al., and EPMA analysis results have been released. Using the measured EPMA data, the HBS formation prediction model of the INFRA code is verified. HBS width prediction results are compared with measured ones, and the Xe concentration profile is compared with the measured EPMA data. The calculated HBS width shows good agreement with the measured data within a reasonable error range. Although there are some differences in the transition region and the central region, due to model limitations and fission gas release prediction errors respectively, the predicted Xe concentration in the fully developed HBS region shows good agreement with the measured data. (Author)

  20. IAMBUS, a computer code for the design and performance prediction of fast breeder fuel rods

    International Nuclear Information System (INIS)

    Toebbe, H.

    1990-05-01

    IAMBUS is a computer code for the thermal and mechanical design, in-pile performance prediction and post-irradiation analysis of fast breeder fuel rods. The code deals with steady, non-steady and transient operating conditions and enables prediction of the in-pile behavior of fuel rods in power reactors as well as in experimental rigs. Great effort went into the development of a realistic account of non-steady fuel rod operating conditions. The main emphasis is placed on characterizing the mechanical interaction taking place between the cladding tube and the fuel as a result of contact pressure and friction forces, with due consideration of axial and radial crack configuration within the fuel as well as the gradual transition at the elastic/plastic interface with respect to fuel behavior. IAMBUS can be readily adapted to various fuel and cladding materials. The specific models and material correlations of the reference version deal with the actual in-pile behavior and physical properties of the KNK II and SNR 300 related fuel rod design, confirmed by comparison of the fuel performance model with post-irradiation data. The comparison comprises steady, non-steady and transient irradiation experiments within the German/Belgian fuel rod irradiation program. The code is further validated by comparison of model predictions with post-irradiation data of standard fuel and breeder rods of Phenix and PFR as well as selected LWR fuel rods under non-steady operating conditions.

  1. Predictions of Critical Heat Flux Using the ASSERT-PV Subchannel Code for a CANFLEX Variant Bundle

    International Nuclear Information System (INIS)

    Onder, Ebru Nihan; Leung, Laurence; Kim, Hung; Rao, Yanfei

    2009-01-01

    The ASSERT-PV subchannel code developed by AECL has been applied as a design-assist tool to the advanced CANDU reactor fuel bundle. Based primarily on the CANFLEX fuel bundle, several geometry changes (such as element sizes and pitch-circle diameters of various element rings) were examined to optimize the dryout power and pressure-drop performances of the new fuel bundle. An experiment was performed to obtain dryout power measurements for verification of the ASSERT-PV code predictions. It was carried out using an electrically heated, Refrigerant-134a-cooled fuel bundle string simulator. The axial power profile of the simulator was uniform, while the radial power profile of the element rings was varied, simulating profiles in bundles with various fuel compositions and burn-ups. Dryout power measurements are predicted closely using the ASSERT-PV code, particularly at low flows and low pressures, but are overpredicted at high flows and high pressures. The majority of data shows that dryout powers are underpredicted at low inlet-fluid temperatures but overpredicted at high inlet-fluid temperatures.

  2. Analysis methodology for RBMK-1500 core safety and investigations on corium coolability during a LWR severe accident

    International Nuclear Information System (INIS)

    Jasiulevicius, Audrius

    2003-01-01

    This thesis presents the work involving two broad aspects within the field of nuclear reactor analysis and safety. These are: - development of a fully independent reactor dynamics and safety analysis methodology of the RBMK-1500 core transient accidents and - experiments on the enhancement of coolability of a particulate bed or a melt pool due to heat removal through the control rod guide tubes. The first part of the thesis focuses on the development of the RBMK-1500 analysis methodology based on the CORETRAN code package. The second part investigates the issue of coolability during severe accidents in LWR type reactors: the coolability of debris bed and melt pool for in-vessel and ex-vessel conditions. The first chapter briefly presents the status of developments in both the RBMK-1500 core analysis and the corium coolability areas. The second chapter describes the generation of the RBMK-1500 neutron cross section data library with the HELIOS code. The cross section library was developed for the whole range of the reactor conditions. The results of the benchmarking with the WIMS-D4 code and validation against the RBMK Critical Facility experiments are also presented here. The HELIOS generated neutron cross section data library provides close agreement with the WIMS-D4 code results. The validation against the data from the Critical Experiments shows that the HELIOS generated neutron cross section library provides excellent predictions for the criticality, axial and radial power distribution, control rod reactivity worths and coolant reactivity effects, etc. The reactivity effects of voiding for the system, fuel assembly and additional absorber channel are underpredicted in the calculations using the HELIOS code generated neutron cross sections. The underprediction, however, is much less than that obtained when the WIMS-D4 code generated cross sections are employed. The third chapter describes the work, performed towards the accurate prediction, assessment and

  3. Rod behaviour under base load, load follow and frequency control operation: CYRANO 2 code predictions versus experimental results

    International Nuclear Information System (INIS)

    Gautier, B.; Raybaud, A.

    1984-01-01

    The French PWR reactors are now operating under load follow and frequency control. In order to demonstrate that these operating conditions do not increase the fuel failure rate, fuel rod behaviour calculations have been performed by E.D.F. with the CYRANO 2 code. In parallel with these theoretical calculations, code predictions have been compared to experimental results. The paper presents some of the comparisons performed on 17x17 fuel irradiated in FESSENHEIM 2 up to 30 GWd/tU under base load operation and in the CAP reactor under load follow and frequency control conditions. It is shown that experimental results can be predicted with reasonable accuracy by the CYRANO 2 code. The experimental work was carried out under joint R and D programs by EDF, FRAGEMA, CEA, and WESTINGHOUSE (CAP program by French partners only). (author)

  4. Use of a commercial heat transfer code to predict horizontally oriented spent fuel rod temperatures

    International Nuclear Information System (INIS)

    Wix, S.D.; Koski, J.A.

    1992-01-01

    Radioactive spent fuel assemblies are a source of hazardous waste that will have to be dealt with in the near future. It is anticipated that the spent fuel assemblies will be transported to disposal sites in spent fuel transportation casks. In order to design a reliable and safe transportation cask, the maximum cladding temperature of the spent fuel rod arrays must be calculated. The maximum rod temperature is a limiting factor in the amount of spent fuel that can be loaded in a transportation cask. The scope of this work is to demonstrate that reasonable and conservative spent fuel rod temperature predictions can be made using commercially available thermal analysis codes. The demonstration is accomplished by a comparison between numerical temperature predictions, with a commercially available thermal analysis code, and experimental temperature data for electrical rod heaters simulating a horizontally oriented spent fuel rod bundle

  5. An integrative approach to predicting the functional effects of small indels in non-coding regions of the human genome.

    Science.gov (United States)

    Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin

    2017-10-06

    Small insertions and deletions (indels) have a significant influence on human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state-of-the-art models in assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact and prioritise small indels throughout the whole non-coding genome.

  6. Occupation and skin cancer: the results of the HELIOS-I multicenter case-control study

    Directory of Open Access Journals (Sweden)

    Gafà Lorenzo

    2007-07-01

    Background: Non-melanoma skin cancer (NMSC) is the most frequent tumour among Caucasian populations worldwide. Among the risk factors associated with this tumour, there are host-related factors and several environmental agents. A greater likelihood of high exposure to physical agents (with the exception of solar radiation) and chemical agents depends on the work setting. Our objective is to evaluate the role of occupational exposures in NMSC, with special emphasis on risk factors other than solar radiation and skin type. Methods: We analysed 1585 cases (1333 basal cell carcinoma (BCC) and 183 squamous cell carcinoma (SCC)) and 1507 controls drawn from the Helios-I multicenter study. Odds ratios (OR) and 95% confidence intervals (CI) were estimated using logistic regression mixed models. Results: For NMSC as a whole (both histological types), miners and quarrymen, secondary education teachers, and masons registered excess risk, regardless of exposure to solar radiation and skin type (OR 7.04, 95% CI 2.44–20.31; OR 1.75, 95% CI 1.05–2.89; and OR 1.54, 95% CI 1.04–2.27, respectively). Frequency of BCC proved higher among railway engine drivers and firemen (OR 4.55; 95% CI 0.96–21.57), specialised farmers (OR 1.65; 95% CI 1.05–2.59) and salesmen (OR 3.02; 95% CI 1.05–2.86), in addition to miners and quarrymen and secondary education teachers (OR 7.96; 95% CI 2.72–23.23 and OR 1.76; 95% CI 1.05–2.94, respectively). The occupations that registered a higher risk of SCC (though not of BCC) were those involving direct contact with livestock, construction workers not elsewhere classified (OR 2.95, 95% CI 1.12–7.74), stationary engine and related equipment operators not elsewhere classified (OR 5.31, 95% CI 1.13–21.04) and masons (OR 2.55, 95% CI 1.36–4.78). Conclusion: Exposure to hazardous air pollutants, arsenic, ionizing radiations and burns may explain a good part of the associations observed in this study. The Helios study affords an

  7. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...
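
    The coding scheme summarized above reduces to a small piece of arithmetic: the prediction error is the difference between received and predicted reward, and a simple learning rule moves the prediction toward the outcome. The learning rate and reward sequence in the sketch below are arbitrary illustrative choices.

```python
# Reward prediction error (RPE) and a simple value update, as a sketch of
# the coding scheme described above (parameters are illustrative).
def rpe_update(value, reward, alpha=0.1):
    """Return (prediction_error, updated_value)."""
    delta = reward - value          # positive if more reward than predicted
    return delta, value + alpha * delta

value = 0.0
for trial, reward in enumerate([1.0, 1.0, 1.0, 0.0, 1.0], start=1):
    delta, value = rpe_update(value, reward)
    print(f"trial {trial}: RPE = {delta:+.3f}, predicted value = {value:.3f}")
```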

  8. Positive Predictive Values of International Classification of Diseases, 10th Revision Coding Algorithms to Identify Patients With Autosomal Dominant Polycystic Kidney Disease

    Directory of Open Access Journals (Sweden)

    Vinusha Kalatharan

    2016-12-01

    Full Text Available Background: International Classification of Diseases, 10th Revision (ICD-10) codes for autosomal dominant polycystic kidney disease (ADPKD) are used within several administrative health care databases. It is unknown whether these codes identify patients who meet strict clinical criteria for ADPKD. Objective: The objective of this study is (1) to determine whether different ICD-10 coding algorithms identify adult patients who meet strict clinical criteria for ADPKD as assessed through medical chart review and (2) to assess the number of patients identified with different ADPKD coding algorithms in Ontario. Design: Validation study of health care database codes, and prevalence. Setting: Ontario, Canada. Patients: For the chart review, 201 adult patients with hospital encounters between April 1, 2002, and March 31, 2014, assigned either ICD-10 code Q61.2 or Q61.3. Measurements: This study measured the positive predictive value of the ICD-10 coding algorithms and the number of Ontarians identified with different coding algorithms. Methods: We manually reviewed a random sample of medical charts in London, Ontario, Canada, and determined whether or not ADPKD was present according to strict clinical criteria. Results: The presence of either ICD-10 code Q61.2 or Q61.3 in a hospital encounter had a positive predictive value of 85% (95% confidence interval [CI], 79%-89%) and identified 2981 Ontarians (0.02% of the Ontario adult population). The presence of ICD-10 code Q61.2 in a hospital encounter had a positive predictive value of 97% (95% CI, 86%-100%) and identified 394 adults in Ontario (0.003% of the Ontario adult population). Limitations: (1) We could not calculate other measures of validity; (2) the coding algorithms do not identify patients without hospital encounters; and (3) coding practices may differ between hospitals. Conclusions: Most patients with ICD-10 code Q61.2 or Q61.3 assigned during their hospital encounters have ADPKD according to the clinical
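
    The reported figures rest on simple confusion-matrix arithmetic: the positive predictive value is the fraction of code-positive charts confirmed as ADPKD on review, with a binomial confidence interval around it. The sketch below uses a normal-approximation interval and illustrative counts; it is not the study's statistical code.

```python
# Positive predictive value of a coding algorithm with an approximate 95% CI
# (normal approximation; counts below are illustrative, not the study data).
import math

def ppv_with_ci(true_positives, total_code_positive, z=1.96):
    p = true_positives / total_code_positive
    se = math.sqrt(p * (1 - p) / total_code_positive)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

ppv, lo, hi = ppv_with_ci(true_positives=170, total_code_positive=201)
print(f"PPV = {ppv:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```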

  9. From structure prediction to genomic screens for novel non-coding RNAs

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.

    2011-01-01

    Abstract: Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction....... This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early...... upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other....
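
    Since structure is the key feature exploited by these screens, a minimal folding example helps fix ideas: the classic Nussinov dynamic program maximizes the number of nested base pairs in a single sequence. Real screening tools use thermodynamic energy models and comparative information instead; the sketch below only illustrates the folding-by-dynamic-programming idea.

```python
# Nussinov base-pair maximization: a minimal illustration of structure-based
# RNA folding (modern screens use energy models and comparative data instead).
def nussinov_max_pairs(seq, min_loop=3):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # leave position j unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pairs:        # pair position k with j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # expect a small number of nested pairs
```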

  10. Investigation to determine the absolute sensitivity of Rh SPNDs

    International Nuclear Information System (INIS)

    Adorian, F.; Patai Szabo, S.; Pos, I.

    1998-01-01

    The goal of the work was to find an empirical sensitivity function of the Rh SPNDs used in VVER-440 reactors and to investigate the accuracy and adequacy of the detector-signal prediction capability of the associated model. In our case the model was based on the HELIOS transport code and the C-PORCA nodal code. A statistical sensitivity analysis versus some selected parameters (e.g. enrichment, burn-up) has been carried out by using a substantial amount of measured data. We also investigated the stability of the electron collecting probability of the detectors versus their burn-up and other parameters with the aim of obtaining a tuned semi-empirical formula for the detector burnup correction. (Authors)

  11. Predicting holland occupational codes by means of paq job dimension scores

    Directory of Open Access Journals (Sweden)

    R. P. Van Der Merwe

    1990-06-01

    Full Text Available A study was conducted on how to obtain Holland's codes for South African occupations practically and economically by deducing them from information on the nature of the occupation (as derived by means of the Position Analysis Questionnaire). A discriminant analysis revealed that, on the basis of the PAQ information, the occupations could be distinguished clearly according to the main orientations of their American codes. Regression equations were also developed to predict the mean Self-Directed Search scores of the occupations on the basis of their PAQ information. Summary: A study was undertaken to obtain Holland's codes for South African occupations in a practical and economical way by deriving them from information on the nature of the occupation (as obtained by means of the Position Analysis Questionnaire). A discriminant analysis showed that, on the basis of the PAQ information, the occupations could be clearly distinguished according to the main occupational groups of their American codes. Furthermore, regression equations were developed to predict the occupations' mean Self-Directed Search scores on the basis of their PAQ information.

  12. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in steady state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  13. Evaluation of CRUDTRAN code to predict transport of corrosion products and radioactivity in the PWR primary coolant system

    International Nuclear Information System (INIS)

    Lee, C.B.

    2002-01-01

    The CRUDTRAN code predicts the transport of corrosion products and their activated nuclides, such as cobalt-58 and cobalt-60, in the PWR primary coolant system. In CRUDTRAN the PWR primary circuit is divided into three principal sections: the core, the coolant and the steam generator. The main driving force for corrosion product transport in the PWR primary coolant comes from the coolant temperature change around the system and the consequent change in corrosion product solubility. As the coolant temperature changes around the PWR primary circuit, the saturation status of the corrosion products in the coolant also changes, giving under-saturation in the steam generator and super-saturation in the core. CRUDTRAN was evaluated by comparison with the results of in-reactor loop tests simulating the PWR primary coolant system and with PWR plant data. The comparison showed that CRUDTRAN could predict the variations of cobalt-58 and cobalt-60 radioactivity with time, plant cycle and coolant chemistry in a PWR plant. (author)
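
    The driving mechanism described above, solubility that changes with coolant temperature around the circuit, can be pictured with a toy mass balance in which under-saturated regions dissolve corrosion products and super-saturated regions deposit them. The solubility law, rate constant and node temperatures below are invented for illustration and are not the CRUDTRAN correlations.

```python
# Toy solubility-driven transport loop (illustrative only; the solubility law,
# rate constant and temperatures are NOT the CRUDTRAN correlations).
def solubility(temp_c):
    # Hypothetical retrograde solubility: decreases with temperature (ppb).
    return 50.0 - 0.1 * temp_c

nodes = {"steam generator": 290.0, "coolant": 300.0, "core": 320.0}  # deg C
concentration = 19.0   # dissolved corrosion product in the coolant (ppb)
rate = 0.2             # fraction of the saturation gap exchanged per pass

for step in range(3):
    for name, temp in nodes.items():
        gap = solubility(temp) - concentration   # >0: dissolution, <0: deposition
        concentration += rate * gap
        print(f"pass {step}, {name:15s}: solubility {solubility(temp):5.1f} ppb, "
              f"coolant {concentration:5.2f} ppb "
              f"({'dissolution' if gap > 0 else 'deposition'})")
```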

  14. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource/Information: What information is available and how can it be useful? Resource/Platform: What kind of platforms are we working with and what are their capabilities and restrictions? This includes computational, memory and acoustic properties and the transmission capacity of the devices used. The book goes on to address Solutions: Which solutions have been proposed and how can they be used to reach the stated goals, and ...

  15. Comparison of the Aerospace Systems Test Reactor loss-of-coolant test data with predictions of the 3D-AIRLOCA code

    International Nuclear Information System (INIS)

    Warinner, D.K.

    1983-01-01

    This paper compares the predictions of the revised 3D-AIRLOCA computer code to those data available from the Aerospace Systems Test Reactor's (ASTR's) loss-of-coolant-accident (LOCA) tests run in 1964. The theoretical and experimental hot-spot temperature responses compare remarkably well. In the thirteen cases studied, the irradiation powers varied from 0.4 to 8.87 MW; the irradiation times were 300, 1540, 1800, and 10^4 s. The degrees of agreement between the data and predictions provide an experimental validation of the 3D-AIRLOCA code

  16. Comparison of the aerospace systems test reactor loss-of-coolant test data with predictions of the 3D-AIRLOCA code

    International Nuclear Information System (INIS)

    Warinner, D.K.

    1984-01-01

    This paper compares the predictions of the revised 3D-AIRLOCA computer code to those data available from the Aerospace Systems Test Reactor's (ASTR's) loss-of-coolant-accident (LOCA) tests run in 1964. The theoretical and experimental hot-spot temperature responses compare remarkably well. In the thirteen cases studied, the irradiation powers varied from 0.4 to 8.87 MW; the irradiation times were 300, 1540, 1800, and 10^4 s. The degrees of agreement between the data and predictions provide an experimental validation of the 3D-AIRLOCA code. (author)

  17. Validation of the ASSERT subchannel code for prediction of CHF in standard and non-standard CANDU bundle geometries

    International Nuclear Information System (INIS)

    Kiteley, J.C.; Carver, M.B.; Zhou, Q.N.

    1993-01-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of CANDU PHWR fuel channels, and to provide a detailed prediction of critical heat flux distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting critical heat flux (CHF) at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental data base. In particular, ASSERT is the only tool available to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. This paper discusses the numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology. The evolutionary validation plan is discussed, and early validation exercises are summarized. The paper concentrates, however, on more recent validation exercises in standard and non-standard geometries. 28 refs., 12 figs

  18. Validation of the ASSERT subchannel code: Prediction of critical heat flux in standard and nonstandard CANDU bundle geometries

    International Nuclear Information System (INIS)

    Carver, M.B.; Kiteley, J.C.; Zhou, R.Q.N.; Junop, S.V.; Rowe, D.S.

    1995-01-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of Canada uranium deuterium (CANDU) pressurized heavy water reactor fuel channels and to provide a detailed prediction of critical heat flux (CHF) distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting CHF at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental database. In particular, ASSERT is an appropriate tool to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. The numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology are discussed. The evolutionary validation plan is also discussed and early validation exercises are summarized. More recent validation exercises in standard and nonstandard geometries are emphasized

  19. Spectral history modeling in the reactor dynamics code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Yurii

    2014-01-01

    A new method of treating spectral history effects in reactor core calculations was developed and verified in this dissertation. The nature of history effects is a dependence of fuel properties not only on the burnup, but also on the local spectral conditions during burnup. The basic idea of the proposed method is the use of the plutonium-239 concentration as the spectral history indicator. The method was implemented in the reactor dynamics code DYN3D and provides a correction for nodal cross sections according to the local spectral history. A verification of the new method was performed by single-assembly calculations in comparison with results of the lattice code HELIOS. The application of the plutonium-based history correction significantly improves the cross section estimation accuracy for both UOX and MOX fuel, in square and hexagonal assembly geometries. The new method was applied to evaluate the influence of history effects on full-core calculation results. Analysis of a PWR equilibrium fuel cycle has shown a significant effect on the axial power distribution during a whole cycle, which causes axial temperature and burnup redistributions. The observed neutron flux redistribution improves neutron economy, so the fuel cycle is longer than in calculations without history corrections. Analyses of hypothetical control rod ejection accidents have shown a minor influence of history effects on the transient course and safety relevant parameters.
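
    The core idea, using the local Pu-239 concentration as a spectral history indicator and correcting nodal cross sections accordingly, can be pictured as a first-order correction term. The linear form, sensitivity coefficient and numbers in the sketch below are illustrative placeholders, not the DYN3D implementation.

```python
# First-order spectral-history correction sketch: adjust a nodal cross section
# by the deviation of the local Pu-239 density from its nominal-history value
# at the same burnup. The linear form and all numbers are illustrative only.
def corrected_xs(sigma_nominal, n_pu9_local, n_pu9_nominal, dsigma_dn):
    """sigma_nominal: cross section from the nominal-history library (1/cm);
    n_pu9_*: Pu-239 number densities (atoms/barn-cm);
    dsigma_dn: assumed sensitivity of the cross section to Pu-239 density."""
    return sigma_nominal + dsigma_dn * (n_pu9_local - n_pu9_nominal)

sigma_a_nominal = 0.0125   # example nodal absorption cross section at 30 MWd/kg
n_pu9_nominal = 1.2e-4     # Pu-239 density assumed in the library history
n_pu9_local = 1.35e-4      # Pu-239 actually built up under a harder spectrum
sensitivity = 40.0         # d(sigma_a)/d(N_Pu9), illustrative value

sigma_a = corrected_xs(sigma_a_nominal, n_pu9_local, n_pu9_nominal, sensitivity)
print(f"corrected sigma_a = {sigma_a:.5f} 1/cm")
```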

  20. Plasma burn-through simulations using the DYON code and predictions for ITER

    International Nuclear Information System (INIS)

    Kim, Hyun-Tae; Sips, A C C; De Vries, P C

    2013-01-01

    This paper will discuss simulations of the full ionization process (i.e. plasma burn-through), fundamental to creating high temperature plasma. By means of an applied electric field, the gas is partially ionized by the electron avalanche process. In order for the electron temperature to increase, the remaining neutrals need to be fully ionized in the plasma burn-through phase, as radiation is the main contribution to the electron power loss. The radiated power loss can be significantly affected by impurities resulting from interaction with the plasma facing components. The DYON code is a plasma burn-through simulator developed at the Joint European Torus (JET) (Kim et al and EFDA-JET Contributors 2012 Nucl. Fusion 52 103016, Kim, Sips and EFDA-JET Contributors 2013 Nucl. Fusion 53 083024). The dynamic evolution of the plasma temperature and plasma densities including the impurity content is calculated in a self-consistent way using plasma wall interaction models. The recent installation of a beryllium wall at JET enabled validation of the plasma burn-through model in the presence of new, metallic plasma facing components. The simulation results of the plasma burn-through phase show consistently good agreement with experiments at JET, and explain differences observed during plasma initiation with the old carbon plasma facing components. In the International Thermonuclear Experimental Reactor (ITER), the allowable toroidal electric field is restricted to 0.35 V m^-1, which is significantly lower than the typical value (∼1 V m^-1) used in present devices. The limitation on toroidal electric field also reduces the range of other operation parameters during plasma formation in ITER. Thus, predictive simulations of plasma burn-through in ITER using a validated model are of crucial importance. This paper provides an overview of the DYON code and the validation, together with new predictive simulations for ITER using the DYON code. (paper)

  1. Comparison of Heavy Water Reactor Thermalhydraulic Code Predictions with Small Break LOCA Experimental Data

    International Nuclear Information System (INIS)

    2012-08-01

    Activities within the frame of the IAEA's Technical Working Group on Advanced Technologies for HWRs (TWG-HWR) are conducted in a project within the IAEA's subprogramme on nuclear power reactor technology development. The objective of the activities on HWRs is to foster, within the frame of the TWG-HWR, information exchange and cooperative research on technology development for current and future HWRs, with an emphasis on safety, economics and fuel resource sustainability. One of the activities recommended by the TWG-HWR was an international standard problem exercise entitled Intercomparison and Validation of Computer Codes for Thermalhydraulics Safety Analyses. Intercomparison and validation of computer codes used in different countries for thermalhydraulics safety analyses will enhance the confidence in the predictions made by these codes. However, the intercomparison and validation exercise needs a set of reliable experimental data. Two RD-14M small break loss of coolant accident (SBLOCA) tests, simulating HWR LOCA behaviour, conducted by Atomic Energy of Canada Ltd (AECL), were selected for this validation project. This report provides a comparison of the results obtained from eight participating organizations from six countries (Argentina, Canada, China, India, Republic of Korea, and Romania), utilizing four different computer codes (ATMIKA, CATHENA, MARS-KS, and RELAP5). General conclusions are reached and recommendations made.

  2. Validation of the assert subchannel code: Prediction of CHF in standard and non-standard Candu bundle geometries

    International Nuclear Information System (INIS)

    Carver, M.B.; Kiteley, J.C.; Zhou, R.Q.N.; Junop, S.V.; Rowe, D.S.

    1993-01-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of CANDU PHWR fuel channels, and to provide a detailed prediction of critical heat flux (CHF) distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting CHF at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental data base. In particular, ASSERT is an appropriate tool to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. This paper discusses the numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology. The evolutionary validation plan is discussed, and early validation exercises are summarized. The paper concentrates, however, on more recent validation exercises in standard and non-standard geometries

  3. lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.

    Science.gov (United States)

    Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia

    2015-01-01

    Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological studies; however, it is still not trivial to accurately distinguish the lncRNA transcripts (LNCTs) from the protein coding ones (PCTs). As a wide range of information and data about lncRNAs has been accumulated by previous studies, it is appealing to develop novel methods to identify the lncRNAs more accurately. Our method lncRScan-SVM aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC) and area under curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting the lncRNAs, and it is quite useful for current lncRNA study.
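
    A minimal version of the SVM classification step can be sketched as follows: numeric features derived from transcript structure and sequence feed a scaled support vector machine. The features and data below are invented placeholders; the actual lncRScan-SVM feature set, built from GENCODE annotations, is considerably richer.

```python
# Sketch of an SVM separating protein-coding from long non-coding transcripts.
# Features and data are invented placeholders, not the lncRScan-SVM feature set.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 600
# Hypothetical features: exon count, transcript length (kb), longest ORF (aa),
# and a conservation score.
X = np.column_stack([
    rng.integers(1, 20, n),
    rng.exponential(2.0, n),
    rng.integers(10, 800, n),
    rng.normal(0.5, 0.2, n),
])
y = (X[:, 2] > 150).astype(int)   # toy rule: long ORF -> protein coding (label 1)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("5-fold accuracy:", acc.mean().round(3))
```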

  4. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural and real scene as we see in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
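
    The lifting structure referred to above splits a signal into even and odd samples, predicts the odd samples from the even ones, and updates the evens with the residual; the proposed scheme replaces the plain predictor with disparity compensation plus luminance correction. The sketch below shows only a generic 1-D Haar-like predict/update pair, not the paper's hybrid stereo predictor.

```python
# Generic 1-D lifting step (Haar-like predict/update), shown only to illustrate
# the lifting structure; the paper replaces the predictor with disparity
# compensation plus luminance correction for stereo pairs.
import numpy as np

def lifting_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict odd samples from even neighbours
    approx = even + 0.5 * detail   # update evens to preserve the running mean
    return approx, detail

def lifting_inverse(approx, detail):
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 11, 15, 14, 14, 9, 8])
a, d = lifting_forward(x)
print("approx:", a, "detail:", d)
print("perfect reconstruction:", np.allclose(lifting_inverse(a, d), x))
```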

  5. Using clinical data to predict high-cost performance coding issues associated with pressure ulcers: a multilevel cohort model.

    Science.gov (United States)

    Padula, William V; Gibbons, Robert D; Pronovost, Peter J; Hedeker, Donald; Mishra, Manish K; Makic, Mary Beth F; Bridges, John Fp; Wald, Heidi L; Valuck, Robert J; Ginensky, Adam J; Ursitti, Anthony; Venable, Laura Ruth; Epstein, Ziv; Meltzer, David O

    2017-04-01

    Hospital-acquired pressure ulcers (HAPUs) have a mortality rate of 11.6%, are costly to treat, and result in Medicare reimbursement penalties. Medicare codes HAPUs according to Agency for Healthcare Research and Quality Patient-Safety Indicator 3 (PSI-03), but they are sometimes inappropriately coded. The objective is to use electronic health records to predict pressure ulcers and to identify coding issues leading to penalties. We evaluated all hospitalized patient electronic medical records at an academic medical center data repository between 2011 and 2014. These data contained patient encounter level demographic variables, diagnoses, prescription drugs, and provider orders. HAPUs were defined by PSI-03: stages III, IV, or unstageable pressure ulcers not present on admission as a secondary diagnosis, excluding cases of paralysis. Random forests reduced data dimensionality. Multilevel logistic regression of patient encounters evaluated associations between covariates and HAPU incidence. The approach produced a sample population of 21 153 patients with 1549 PSI-03 cases. The greatest odds ratio (OR) of HAPU incidence was among patients diagnosed with spinal cord injury (ICD-9 907.2: OR = 14.3; P  coded for paralysis, leading to a PSI-03 flag. Other high ORs included bed confinement (ICD-9 V49.84: OR = 3.1, P  coded without paralysis, leading to PSI-03 flags. The resulting statistical model can be tested to predict HAPUs during hospitalization. Inappropriate coding of conditions leads to poor hospital performance measures and Medicare reimbursement penalties.

  6. Measurement of low mass muon pairs in sulphur-nucleus collisions with an optimized HELIOS muon spectrometer

    CERN Multimedia

    2002-01-01

    Dileptons provide a unique and specific tool to detect collective behaviour and to probe for signs of quark gluon plasma formation in nucleus-nucleus interactions. In particular, in the low transverse mass region, below the rho meson, dimuons probe the thermal nature of the interaction while their multiplicity dependence can indicate nuclear volume effects. This experiment uses the (almost) unchanged HELIOS muon spectrometer and a combination of a new carefully designed light absorber, at an optimized distance from the target, and multiplicity measurements provided by new Silicon ring detectors, covering more than the muon rapidity acceptance. It intends to improve in quality and quantity on the low mass, low $p_{T}$ dimuon signal already observed in the NA34/2 experiment. The wide range of rapidity from 3.5 to 6.0 will enable us to explore the rapidity dependence of the signal from high energy density at nearly central rapidity up to very forward rapidities. The commissioning of the new apparatus (...

  7. Analytical prediction of CHF by FIDAS code based on three-fluid and film-dryout model

    International Nuclear Information System (INIS)

    Sugawara, Satoru

    1990-01-01

    An analytical prediction model of critical heat flux (CHF) has been developed on the basis of a film dryout criterion governed by droplet deposition and entrainment in annular mist flow. Critical heat fluxes in round tubes were analyzed by the Film Dryout Analysis Code in Subchannels (FIDAS), which is based on the three-fluid, three-field and newly developed film dryout model. Predictions by FIDAS were compared with worldwide experimental data on CHF obtained in water and Freon for uniformly and non-uniformly heated tubes under vertical upward flow conditions. Furthermore, the CHF prediction capability of FIDAS was compared with those of other film dryout models for annular flow and with Katto's CHF correlation. The predictions of FIDAS are in sufficient agreement with the experimental CHF data, and show better agreement than the other film dryout models and the empirical correlation of Katto. (author)

  8. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    Science.gov (United States)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (= CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics - especially for the cases with HHC - and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  9. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    Science.gov (United States)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics - especially for the cases with HHC - and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  10. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  11. Development of a Burnup Module DECBURN Based on the Krylov Subspace Method

    Energy Technology Data Exchange (ETDEWEB)

    Cho, J. Y.; Kim, K. S.; Shim, H. J.; Song, J. S

    2008-05-15

    This report describes the development of a burnup module, DECBURN, that is essential for reactor analysis and assembly homogenization codes to trace the fuel composition change during core burnup. The module solves the burnup equation by the matrix exponential method based on the Krylov subspace method, and the final matrix exponential is evaluated by the matrix scaling and squaring method. To develop the DECBURN module, this report covers the following: (1) the Krylov subspace method for the burnup equation, (2) construction of the DECBURN module, (3) library structure setup and library generation, (4) examination of the DECBURN module, and (5) implementation in the DeCART code and verification. The DECBURN library includes the decay constants, one-group cross sections and the fission yields. The DECBURN module is examined by means of a driver program, and its results are compared with those of the ORIGEN program. Also, the DECBURN module implemented in the DeCART code is applied to the LWR depletion benchmark and an OPR-1000 pin cell problem, and the solutions are compared with the HELIOS code to verify the computational soundness and accuracy. In this process, the criticality calculation method and the predictor-corrector scheme are introduced into the DeCART code to serve the homogenization function. The examination with the driver program shows that the DECBURN module produces exactly the same solution as the ORIGEN program. The DeCART code equipped with the DECBURN module produces solutions compatible with the other codes for the LWR depletion benchmark. Also, the multiplication factors of the DeCART code for the OPR-1000 pin cell problem agree with the HELIOS code within 100 pcm over all burnup steps, and the multiplication factors with the criticality calculation are also compatible with the HELIOS code. These results show that the developed DECBURN module works soundly and produces accurate solutions.
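
    The central numerical step, advancing dN/dt = A·N over a burnup step via the matrix exponential, can be illustrated on a tiny depletion chain. The sketch below uses SciPy's expm (a Padé scaling-and-squaring implementation) on an invented three-nuclide chain; DECBURN's Krylov subspace treatment and its library data are not reproduced here.

```python
# Burnup step N(t) = expm(A*t) @ N(0) on a toy three-nuclide chain
# (removal rates invented; not the DECBURN library or its Krylov solver).
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1.0e-5, 5.0e-6      # effective removal rates of nuclides 1 and 2 (1/s)
# Chain: nuclide 1 -> 2 -> 3 (nuclide 3 treated as stable here).
A = np.array([
    [-lam1,   0.0, 0.0],
    [ lam1, -lam2, 0.0],
    [  0.0,  lam2, 0.0],
])

N0 = np.array([1.0e22, 0.0, 0.0])    # initial number densities
dt = 30 * 24 * 3600.0                # one 30-day burnup step (s)
N = expm(A * dt) @ N0
print("end-of-step densities:", N)
print("atoms conserved:", np.isclose(N.sum(), N0.sum()))
```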

  12. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction
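
    The abstract does not reproduce the derived formula, so purely as an illustration of the kind of estimate involved, the sketch below combines an Amdahl-style serial fraction with a crude processor-availability factor for a time-shared machine. Both terms and all numbers are assumptions, not the formula derived in the paper.

```python
# Illustrative multitasking-efficiency estimate for a time-shared machine.
# This is NOT the formula derived in the paper; it only combines an
# Amdahl-style speedup with a crude processor-availability factor.
def estimated_efficiency(serial_fraction, n_cpus, availability):
    """availability: average fraction of the machine's CPUs actually granted
    to the job under time-sharing (0..1)."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)
    return availability * speedup / n_cpus

for avail in (1.0, 0.7, 0.4):
    eff = estimated_efficiency(serial_fraction=0.05, n_cpus=4, availability=avail)
    print(f"availability {avail:.0%}: multiprocessing efficiency ~ {eff:.2f}")
```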

  13. Computer code to predict the heat of explosion of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B.

    2009-01-01

    Relating the thermochemical changes involved in the explosion of a high energy material (HEM) to its molecular structure helps HEM chemists and engineers predict important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as the explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the heat of explosion (ΔH_e) of different HEMs to be predicted without any experimental data, and the predictions are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R^2 = 0.9721 with the linear equation y = 0.9262x + 101.45. This value of 0.9721 shows that the computed values are in good agreement with the experimental values and are useful for rapid hazard assessment of energetic materials.
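
    The reported regression (y = 0.9262x + 101.45 with R^2 = 0.9721) is a standard least-squares fit of measured against computed heats of explosion, which can be checked or reused in a few lines. The data points in the sketch below are invented placeholders; only the form of the calculation is shown.

```python
# Least-squares fit of experimental vs. computed heat of explosion and its R^2.
# Data points are invented placeholders; the paper reports
# y = 0.9262x + 101.45 with R^2 = 0.9721 for its full data set.
import numpy as np

computed = np.array([4200.0, 4750.0, 5300.0, 5900.0, 6400.0])   # kJ/kg (x)
measured = np.array([3990.0, 4580.0, 5050.0, 5560.0, 6030.0])   # kJ/kg (y)

slope, intercept = np.polyfit(computed, measured, 1)
predicted = slope * computed + intercept
ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"y = {slope:.4f} x + {intercept:.2f}, R^2 = {r_squared:.4f}")
```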

  14. The effect of turbulent mixing models on the predictions of subchannel codes

    International Nuclear Information System (INIS)

    Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.

    1994-01-01

    In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange. ((orig.))

  15. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    Science.gov (United States)

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
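
    Sparse coding of image patches, the machinery these models share, can be reproduced in outline with standard dictionary learning: learn an overcomplete basis whose sparse combinations reconstruct the patches. The sketch below runs on random data rather than natural or binocular image patches, so it shows only the machinery, not the rearing-condition experiments or their receptive-field results.

```python
# Dictionary learning on patch-like data as a stand-in for sparse coding of
# natural image patches (random data here, so atoms will not look like V1 RFs).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 64))            # 1000 "patches" of 8x8 pixels
patches -= patches.mean(axis=1, keepdims=True)   # remove the DC component

learner = MiniBatchDictionaryLearning(
    n_components=100,      # overcomplete: 100 atoms for 64-dimensional patches
    alpha=1.0,             # sparsity penalty
    batch_size=50,
    random_state=0,
)
codes = learner.fit_transform(patches)           # sparse coefficients per patch
dictionary = learner.components_                 # learned basis functions (atoms)

print("dictionary shape:", dictionary.shape)
print("mean fraction of non-zero coefficients:", (codes != 0).mean().round(3))
```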

  16. Evaluation of Design & Analysis Code, CACTUS, for Predicting Crossflow Hydrokinetic Turbine Performance

    Energy Technology Data Exchange (ETDEWEB)

    Wosnik, Martin [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Bachant, Pete [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Murphy, Andrew W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical axis and horizontal axis wind turbines, and it has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines has yet to be tested. The present study addresses this problem by comparing the predicted performance curves derived from CACTUS simulations of the U.S. Department of Energy’s 1:6 scale reference model crossflow turbine to those derived by experimental measurements in a tow tank using the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns on its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause for poor model prediction. A comparison of several different NACA 0021 foil data sources, derived using both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are, therefore, advised to limit its application to higher tip speed ratios (lower AoA), and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil is the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.

  17. Assessment of the prediction capability of the TRANSURANUS fuel performance code on the basis of power ramp tested LWR fuel rods

    International Nuclear Information System (INIS)

    Pastore, G.; Botazzoli, P.; Di Marcello, V.; Luzzi, L.

    2009-01-01

    The present work is aimed at assessing the prediction capability of the TRANSURANUS code for the performance analysis of LWR fuel rods under power ramp conditions. The analysis refers to all the power ramp tested fuel rods belonging to the Studsvik PWR Super-Ramp and BWR Inter-Ramp Irradiation Projects, and is focused on some integral quantities (i.e., burn-up, fission gas release, cladding creep-down and failure due to pellet cladding interaction) through a systematic comparison between the code predictions and the experimental data. To this end, a suitable setup of the code is established on the basis of previous works. In addition, following indications from the literature, a sensitivity study is carried out, which considers the 'ITU model' for fission gas burst release and modifications in the treatment of the fuel solid swelling and the cladding stress corrosion cracking. The analyses performed highlight some issues that could be useful for the future development of the code. Keywords: Light Water Reactors, Fuel Rod Performance, Power Ramps, Fission Gas Burst Release, Fuel Swelling, Pellet Cladding Interaction, Stress Corrosion Cracking

  18. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  19. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    International Nuclear Information System (INIS)

    Yu Jia-Feng; Sui Tian-Xiang; Wang Ji-Hua; Wang Hong-Mei; Wang Chun-Ling; Jing Li

    2015-01-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been questioned continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of genes with known functions, a mean accuracy of 99.99% and a mean Matthews correlation coefficient of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results were very reliable. This work can provide an efficient tool and data resources for future studies of A. tumefaciens strain C58. (special topic)
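
    The accuracy and Matthews correlation coefficient quoted above follow from standard confusion-matrix arithmetic; a compact sketch with illustrative counts is given below (the counts are not the paper's test results).

```python
# Matthews correlation coefficient and accuracy from a confusion matrix
# (counts below are illustrative, not the paper's test results).
import math

def mcc_and_accuracy(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0
    return mcc, acc

mcc, acc = mcc_and_accuracy(tp=498, tn=499, fp=1, fn=2)
print(f"MCC = {mcc:.4f}, accuracy = {acc:.4%}")
```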

  20. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes ("Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask," R.E. Glass, Sandia National Laboratories, 1985; "Sample Problem Manual for Benchmarking of Cask Analysis Codes," R.E. Glass, Sandia National Laboratories, 1988; "Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages," R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in "Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks," R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  1. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    Science.gov (United States)

    Stewart, Kyle R.; Maller, Ariyeh H.; Oñorbe, Jose; Bullock, James S.; Joung, M. Ryan; Devriendt, Julien; Ceverino, Daniel; Kereš, Dušan; Hopkins, Philip F.; Faucher-Giguère, Claude-André

    2017-07-01

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ~4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.

  2. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kyle R. [Department of Mathematical Sciences, California Baptist University, 8432 Magnolia Ave., Riverside, CA 92504 (United States); Maller, Ariyeh H. [Department of Physics, New York City College of Technology, 300 Jay St., Brooklyn, NY 11201 (United States); Oñorbe, Jose [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Bullock, James S. [Center for Cosmology, Department of Physics and Astronomy, The University of California at Irvine, Irvine, CA 92697 (United States); Joung, M. Ryan [Department of Astronomy, Columbia University, New York, NY 10027 (United States); Devriendt, Julien [Department of Physics, University of Oxford, The Denys Wilkinson Building, Keble Rd., Oxford OX1 3RH (United Kingdom); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Albert-Ueberle-Str. 2, D-69120 Heidelberg (Germany); Kereš, Dušan [Department of Physics, Center for Astrophysics and Space Sciences, University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Hopkins, Philip F. [California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States); Faucher-Giguère, Claude-André [Department of Physics and Astronomy and CIERA, Northwestern University, 2145 Sheridan Rd., Evanston, IL 60208 (United States)

    2017-07-01

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ∼4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.

  3. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research in bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSP) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better-than-toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
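
    Computing LPC coefficients for a frame is a compact, well-defined calculation: form the autocorrelation of the windowed frame and solve the normal equations, classically with the Levinson-Durbin recursion. The sketch below uses a synthetic frame and an arbitrary order of 10; it contains no DSP32C-specific code.

```python
# LPC analysis of one frame via autocorrelation + Levinson-Durbin recursion.
# Synthetic signal and order 10 are illustrative; no DSP32C-specific code.
import numpy as np

def lpc_coefficients(frame, order):
    # Autocorrelation of the windowed frame for lags 0..order.
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):                 # Levinson-Durbin recursion
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] += k * a[i - 1::-1][:i]        # a_j += k * a_{i-j}; a_i = k
        err *= (1.0 - k * k)
    return a, err

fs = 8192
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
frame *= np.hamming(frame.size)

a, err = lpc_coefficients(frame, order=10)
print("prediction coefficients:", np.round(a[1:], 3))
print("residual energy:", round(float(err), 3))
```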

  4. Transmutation potential of reactor WWER-440

    International Nuclear Information System (INIS)

    Darilek, P.; Sebian, V.; Necas, V.

    2001-01-01

    A theoretical evaluation of the WWER-440 transmutation potential with the HELIOS code is presented. A proposed transmutation method comprising special transmutation pins, a combined fuel assembly (FA) and simple reprocessing is described. The transmutation efficiency of the method is characterized. (Authors)

  5. ℓ2 Optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. The said uniform scalar quantizer is replaced, in the proposed new approach, by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but also it achieves higher PSNR for relatively high bit rates.
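
    The common near-lossless structure mentioned above, a DPCM prediction loop with a residual quantizer whose step size guarantees |error| ≤ delta at every sample, can be written in a few lines. The sketch below implements only that baseline with a single uniform quantizer and a previous-sample predictor; it does not implement the paper's context-based ℓ2-optimized quantizers.

```python
# Near-lossless DPCM with a uniform residual quantizer guaranteeing an
# l-infinity bound of `delta` per sample (1-D previous-sample predictor).
# This is the baseline structure only, not the paper's context-based quantizers.
import numpy as np

def dpcm_near_lossless(x, delta):
    step = 2 * delta + 1
    recon = np.empty_like(x, dtype=float)
    indices = np.empty(x.shape, dtype=int)
    prev = 0.0
    for i, sample in enumerate(x.astype(float)):
        residual = sample - prev                   # prediction error
        q = int(np.round(residual / step))         # quantizer index to be coded
        recon[i] = prev + q * step                 # decoder-side reconstruction
        indices[i] = q
        prev = recon[i]                            # closed-loop prediction
    return indices, recon

x = np.array([100, 102, 107, 115, 114, 90, 88, 95])
idx, rec = dpcm_near_lossless(x, delta=2)
print("max abs error:", np.max(np.abs(rec - x)))   # never exceeds delta
```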

  6. A numerical study of the influence of the void drift model on the predictions of the assert subchannel code

    International Nuclear Information System (INIS)

    Tye, P.; Teyssedou, A.; Troche, N.; Kiteley, J.

    1996-01-01

    One of the factors which is important in order to ensure the continued safe operation of nuclear reactors is the ability to accurately predict the critical heat flux (CHF) throughout the rod bundles in the fuel channel. One method currently used by the Canadian nuclear industry to predict the CHF in the fuel bundles of CANDU reactors is to use the ASSERT subchannel code to predict the local thermal-hydraulic conditions prevailing at each axial location in each subchannel, in conjunction with appropriate correlations or the CHF look-up table. The successful application of the above methods depends greatly on the ability of ASSERT to accurately predict the local flow conditions throughout the fuel channel. In this paper, full-range qualitative verification tests using the ASSERT subchannel code are presented which show the influence of the void drift model on the predictions of the local subchannel quality. For typical cases using a 7-rod subset of a full 37-element rod bundle taken from the ASSERT validation database, it will be shown that the void drift term can significantly influence the calculated distribution of the quality in the rod bundle. In order to isolate, as much as possible, the influence of the void drift term, this first numerical study is carried out with the rod bundle oriented both vertically and horizontally. Subsequently, additional numerical experiments will be presented which show the influence that the void drift model has on the predicted CHF locations. (author)

  7. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    Energy Technology Data Exchange (ETDEWEB)

    Karahan, Aydin, E-mail: karahan@mit.ed [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology (United States); Buongiorno, Jacopo [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology (United States)

    2010-01-31

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO{sub 2}-PuO{sub 2} mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium

  8. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    International Nuclear Information System (INIS)

    Karahan, Aydin; Buongiorno, Jacopo

    2010-01-01

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO 2 -PuO 2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors

  9. Benchmarking and qualification of the nufreq-npw code for best estimate prediction of multi-channel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.

    1988-01-01

    The work described in this paper is focused on the development, verification and benchmarking of the NUFREQ-NPW code at Westinghouse, USA for best-estimate prediction of multi-channel core stability margins in US BWRs. Various models incorporated into NUFREQ-NPW are systematically compared against the Westinghouse channel stability analysis code MAZDA, whose mathematical model was developed in an entirely different manner. The NUFREQ-NPW code is extensively benchmarked against experimental stability data with and without nuclear reactivity feedback. Detailed comparisons are next performed against nuclear-coupled core stability data. A physically based algorithm is developed to correct for the effect of flow development on subcooled boiling. Use of this algorithm (to be described in the full paper) captures the peak magnitude as well as the resonance frequency with good accuracy.

  10. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V smaller than A) were found. An audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  11. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    International Nuclear Information System (INIS)

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-01-01

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the toxicity (oral LD50 in mice) of benzoic acid compounds, with more reliance to be attached to their answers than to the outputs of more complex methods. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based only on the number of carbon and hydrogen atoms; it can be improved by several molecular fragments in the second model. For 57 benzoic compounds, for which computed results of a quantitative structure–toxicity relationship (QSTR) were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.
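
    The abstract does not give the actual correlation, so the sketch below is a purely hypothetical illustration of the element-count idea behind the first model: fit a linear relation between oral LD50 and the numbers of carbon and hydrogen atoms by ordinary least squares on a user-supplied training set. The demonstration data are synthetic, not values from the paper.

    ```python
    import numpy as np

    def fit_element_count_model(n_carbon, n_hydrogen, ld50):
        """Fit LD50 ~ a + b*nC + c*nH by ordinary least squares (illustrative)."""
        X = np.column_stack([np.ones(len(ld50)), n_carbon, n_hydrogen])
        coef, *_ = np.linalg.lstsq(X, np.asarray(ld50, dtype=float), rcond=None)
        return coef

    def predict_ld50(coef, n_carbon, n_hydrogen):
        """Desk-style estimate of LD50 from elemental composition (illustrative)."""
        return coef[0] + coef[1] * n_carbon + coef[2] * n_hydrogen

    # Synthetic demonstration data only -- not data from the paper.
    rng = np.random.default_rng(0)
    nC = rng.integers(7, 12, size=20)
    nH = rng.integers(5, 14, size=20)
    ld50 = 150.0 * nC + 40.0 * nH + rng.normal(0.0, 100.0, size=20)

    coef = fit_element_count_model(nC, nH, ld50)
    print(predict_ld50(coef, 7, 6))
    ```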

  12. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    Science.gov (United States)

    Yu, Jia-Feng; Sui, Tian-Xiang; Wang, Hong-Mei; Wang, Chun-Ling; Jing, Li; Wang, Ji-Hua

    2015-12-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been queried continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of function-known genes, a mean accuracy of 99.99% and a mean Matthews correlation coefficient of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results were very reliable. This work can provide an efficient tool and data resources for future studies of A. tumefaciens strain C58. Project supported by the National Natural Science Foundation of China (Grant Nos. 61302186 and 61271378) and by funding from the State Key Laboratory of Bioelectronics of Southeast University.
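
    As a brief illustration of the Z curve representation used (together with the TN curve) to discriminate coding from non-coding sequences, the sketch below computes the three cumulative Z-curve components of a DNA string; the downstream classifier itself is not reproduced here.

    ```python
    def z_curve(seq):
        """Return the cumulative Z-curve components (x, y, z) of a DNA string.
        x: purine vs pyrimidine, y: amino vs keto, z: weak vs strong H-bonding."""
        x, y, z = [0], [0], [0]
        counts = {"A": 0, "C": 0, "G": 0, "T": 0}
        for base in seq.upper():
            if base in counts:
                counts[base] += 1
            a, c, g, t = counts["A"], counts["C"], counts["G"], counts["T"]
            x.append((a + g) - (c + t))
            y.append((a + c) - (g + t))
            z.append((a + t) - (g + c))
        return x, y, z

    x, y, z = z_curve("ATGGCGTACCTGAAATAG")
    print(x[-1], y[-1], z[-1])
    ```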

  13. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  14. Improved Intra-coding Methods for H.264/AVC

    Directory of Open Access Journals (Sweden)

    Li Song

    2009-01-01

    Full Text Available The H.264/AVC design adopts a multidirectional spatial prediction model to reduce spatial redundancy, where neighboring pixels are used as a prediction for the samples in a data block to be encoded. In this paper, a recursive prediction scheme and an enhanced block-matching algorithm (BMA) prediction scheme are designed and integrated into the state-of-the-art H.264/AVC framework to provide a new intra coding model. Extensive experiments demonstrate that the coding efficiency can be increased by 0.27 dB on average in comparison with the performance of the conventional H.264 coding model.
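
    For readers unfamiliar with the baseline that the proposed recursive and BMA schemes extend, the sketch below reproduces three of the standard H.264/AVC-style intra prediction modes (vertical, horizontal, DC) for a 4x4 block from its reconstructed neighbours; it is a simplified illustration, not the authors' enhanced predictor, and the example pixel values are made up.

    ```python
    import numpy as np

    def intra_predict_4x4(top, left, mode):
        """Simplified H.264-style intra prediction for a 4x4 block.
        top:  4 reconstructed pixels above the block
        left: 4 reconstructed pixels to the left of the block."""
        top = np.asarray(top, dtype=np.int32)
        left = np.asarray(left, dtype=np.int32)
        if mode == "vertical":            # copy the row above downwards
            return np.tile(top, (4, 1))
        if mode == "horizontal":          # copy the left column rightwards
            return np.tile(left.reshape(4, 1), (1, 4))
        if mode == "dc":                  # flat prediction at the mean of the neighbours
            dc = (top.sum() + left.sum() + 4) // 8
            return np.full((4, 4), dc, dtype=np.int32)
        raise ValueError("unsupported mode")

    top = np.array([120, 121, 119, 118])      # illustrative neighbour pixels
    left = np.array([117, 116, 118, 120])
    block = np.random.randint(100, 140, (4, 4))
    pred = intra_predict_4x4(top, left, mode="dc")
    residual = block - pred                   # residual that would be transform-coded
    print(residual)
    ```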

  15. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  16. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    International Nuclear Information System (INIS)

    Geffraye, G.; Bazin, P.; Pichon, P.

    1995-01-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method which gives the user the possibility of controlling the flooding limit at a given location is developed. In order to minimize the user effect, a methodology is proposed to the user for calculations with counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit

  17. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    Energy Technology Data Exchange (ETDEWEB)

    Geffraye, G.; Bazin, P.; Pichon, P. [CEA/DRN/STR, Grenoble (France)

    1995-09-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method which gives the user the possibility of controlling the flooding limit at a given location is developed. In order to minimize the user effect, a methodology is proposed to the user for calculations with counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.
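
    Flooding-limit correlations of the kind discussed in these records are typically of the Wallis form. The sketch below evaluates a generic Wallis-type flooding criterion; it is not CATHARE's actual implementation, and the constants m and c as well as the example flow conditions are illustrative assumptions.

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def j_star(j, rho_phase, rho_liquid, rho_gas, diameter):
        """Wallis non-dimensional superficial velocity for one phase."""
        return j * math.sqrt(rho_phase / (G * diameter * (rho_liquid - rho_gas)))

    def flooding_margin(j_gas, j_liq, rho_gas, rho_liq, diameter, m=1.0, c=0.7):
        """Return c - (sqrt(jg*) + m*sqrt(jf*)); a negative value means the
        Wallis-type flooding (CCFL) limit is exceeded. m and c are illustrative
        constants, not the values used in CATHARE."""
        jg = j_star(j_gas, rho_gas, rho_liq, rho_gas, diameter)
        jf = j_star(j_liq, rho_liq, rho_liq, rho_gas, diameter)
        return c - (math.sqrt(jg) + m * math.sqrt(jf))

    # Example: steam/water counter-current flow in a 0.7 m duct (illustrative numbers)
    print(flooding_margin(j_gas=5.0, j_liq=0.3, rho_gas=10.0, rho_liq=750.0, diameter=0.7))
    ```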

  18. RBMK fuel channel blockage analysis by MCNP5, DRAGON and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Parisi, C.; D'Auria, F.

    2007-01-01

    The aim of this work was to perform precise criticality analyses with the Monte Carlo code MCNP5 for a Fuel Channel (FC) flow blockage accident, considering as calculation domain a single FC and a 3x3 lattice of RBMK cells. Boundary conditions for the MCNP5 input were derived from a previous transient calculation with the state-of-the-art codes HELIOS/RELAP5-3D. In a preliminary phase, suitable MCNP5 models of a single cell and of a small lattice of RBMK cells were set up; criticality analyses were performed at reference conditions for 2.0% and 2.4% enriched fuel. These analyses were compared with results obtained by the University of Pisa (UNIPI) using the deterministic transport code DRAGON and with results obtained by the NIKIET Institute using MCNP4C. Then, the changes of the main physical parameters (e.g. fuel and water/steam temperature, water density, graphite temperature) at different time intervals of the FC blockage transient were evaluated by a RELAP5-3D calculation. This information was used to set up further MCNP5 inputs. Criticality analyses were performed for the different systems (single channel and lattice) at those transient states, obtaining the global criticality versus transient time. Finally, the weight of each parameter's change (fuel overheating and channel voiding) on the global criticality was assessed. The results showed that the reactivity of a blocked FC is always negative; nevertheless, when considering the effect of neighboring channels, the global reactivity trend reverts, becoming slightly positive or not changing at all, in inverse relation to the fuel enrichment. (author)

  19. Prediction of detonation and JWL eos parameters of energetic materials using EXPLO5 computer code

    CSIR Research Space (South Africa)

    Peter, Xolani

    2016-09-01

    Full Text Available Nowadays many numerical methods and programs are being used for carrying out thermodynamic calculations of the detonation parameters of condensed explosives, for example BKW Fortran (Mader, 1967), Ruby (Cowperthwaite and Zwisler, 1974), TIGER...
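
    The JWL equation of state referred to in the title has the standard form P(V,E) = A(1 − ω/(R1·V))·exp(−R1·V) + B(1 − ω/(R2·V))·exp(−R2·V) + ωE/V, with V the relative volume of the detonation products. As a hedged illustration, the sketch below simply evaluates that expression; the fit parameters must come from a code such as EXPLO5 and the numbers used in the example call are placeholders only.

    ```python
    import math

    def jwl_pressure(v_rel, e_vol, a, b, r1, r2, omega):
        """Standard JWL equation of state for detonation products.
        v_rel: relative volume V = v/v0 (dimensionless)
        e_vol: detonation energy per unit initial volume (same units as a, b)
        a, b, r1, r2, omega: JWL fit parameters (e.g. from an EXPLO5 fit)."""
        term1 = a * (1.0 - omega / (r1 * v_rel)) * math.exp(-r1 * v_rel)
        term2 = b * (1.0 - omega / (r2 * v_rel)) * math.exp(-r2 * v_rel)
        return term1 + term2 + omega * e_vol / v_rel

    # Illustrative call only -- these parameter values are placeholders,
    # not fitted constants for any real explosive.
    print(jwl_pressure(v_rel=2.0, e_vol=7.0, a=500.0, b=10.0, r1=4.5, r2=1.2, omega=0.3))
    ```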

  20. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    International Nuclear Information System (INIS)

    Aragones, J.M.; Ahnert, C.

    1995-01-01

    New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic-thermalhydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain, since the startup of its cycle 7 in mid-June, 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the machine-man interfaces for online acquisition of measured data and interactive graphical utilization, in C and X11. The agreement of the simulated results with the measured data, during the startup tests and first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, has proved that SIMTRAN and the man-machine graphic user interface have the qualities for a fast, accurate, user friendly, reliable, detailed and comprehensive online core surveillance and prediction

  1. Thorium fuel cycle management

    International Nuclear Information System (INIS)

    Zajac, R.; Darilek, P.; Breza, J.; Necas, V.

    2010-01-01

    In this presentation the author deals with thorium fuel cycle management. A description of thorium fuels, of the benefits and challenges of the thorium fuel cycle, and of thorium fuel calculations performed with the computer code HELIOS is presented.

  2. A stochastic-deterministic approach for evaluation of uncertainty in the predicted maximum fuel bundle enthalpy in a CANDU postulated LBLOCA event

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2014-07-01

    A stochastic-deterministic approach based on representation of uncertainties by subjective probabilities is proposed for evaluation of bounding values of functional failure probability and assessment of probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA and evaluation of the possibility distribution function of maximum bundle enthalpy considering the reactor physics part of LBLOCA power pulse simulation only. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure driven by the computer code DAKOTA to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions with deterministic constraints and fixed transient bundle-wise thermal hydraulic conditions. With this information, a bounding estimate of functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in evaluation of core damage frequency. (author)

  3. Bayesian decision support for coding occupational injury data.

    Science.gov (United States)

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of SW and TW models, and various prediction strength thresholds for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data is useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
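
    A minimal sketch of the single-word Naive Bayes scoring and top-k output that the proposed decision-support system builds on is given below; the specific smoothing, agreement thresholds, and the two-word-sequence model of the paper are omitted, and the toy narratives and codes are hypothetical.

    ```python
    from collections import Counter, defaultdict
    import math

    def train_single_word_nb(examples):
        """examples: list of (narrative_text, injury_code). Returns class priors,
        per-code word counts, and the vocabulary for a single-word Naive Bayes model."""
        code_counts = Counter()
        word_counts = defaultdict(Counter)
        vocab = set()
        for text, code in examples:
            code_counts[code] += 1
            for w in text.lower().split():
                word_counts[code][w] += 1
                vocab.add(w)
        return code_counts, word_counts, vocab

    def top_k_codes(text, code_counts, word_counts, vocab, k=5):
        """Return the k most probable codes with Laplace-smoothed log scores."""
        total = sum(code_counts.values())
        scores = {}
        for code, n in code_counts.items():
            score = math.log(n / total)
            denom = sum(word_counts[code].values()) + len(vocab)
            for w in text.lower().split():
                score += math.log((word_counts[code][w] + 1) / denom)
            scores[code] = score
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

    # Hypothetical toy data, for illustration only.
    examples = [("fell from ladder while painting", "FALL"),
                ("cut hand on sharp blade", "CUT"),
                ("slipped on wet floor", "FALL")]
    model = train_single_word_nb(examples)
    print(top_k_codes("worker slipped and fell", *model, k=2))
    ```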

  4. The representatives of the various intersubchannel transfer mechanisms and their effects on the predictions of the ASSERT-4 subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Tye, P [Ecole Polytechnique, Montreal, PQ (Canada)

    1994-12-31

    In this paper, the effects that the constitutive relations used to represent some of the intersubchannel transfer mechanisms have on the predictions of the ASSERT-4 subchannel code for horizontal flows are examined. In particular, the choices made in the representation of the gravity-driven phase separation phenomenon, which is unique to the horizontal fuel channel arrangement seen in CANDU reactors, are analyzed. This is done by comparing the predictions of the ASSERT-4 subchannel code with experimental data on void fraction, mass flow rate, and pressure drop obtained for two horizontal interconnected subchannels. ASSERT-4, the subchannel code used by the Canadian nuclear industry, uses an advanced drift flux model which permits departure from both thermal and mechanical equilibrium between the phases to be accurately modeled. In particular, ASSERT-4 contains models for the buoyancy effects which cause phase separation between adjacent subchannels in horizontal flows. This feature, which is of great importance in the subchannel analysis of CANDU reactors, is implemented in the constitutive relationship for the relative velocity required by the conservation equations. In order to, as much as is physically possible, isolate different inter-subchannel transfer mechanisms, three different subchannel orientations are analyzed. These are: the two subchannels at the same elevation, the high void subchannel below the low void subchannel, and the high void subchannel above the low void subchannel. It is observed that for all three subchannel orientations ASSERT-4 does a reasonably good job of predicting the experimental trends. However, certain modifications to the representation of the gravitational phase separation effects which seem to improve the overall predictions are suggested. (author). 12 refs., 12 figs.

  5. Assessment of 12 CHF prediction methods, for an axially non-uniform heat flux distribution, with the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrouk, M. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria)], E-mail: m_ferrouk@yahoo.fr; Aissani, S. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria); D' Auria, F.; DelNevo, A.; Salah, A. Bousbia [Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione, Universita di Pisa (Italy)

    2008-10-15

    The present article covers the evaluation of the performance of twelve critical heat flux methods/correlations published in the open literature. The study concerns the simulation of an axially non-uniform heat flux distribution with the RELAP5 computer code in a single boiling water reactor channel benchmark problem. The nodalization scheme employed for the considered particular geometry, as modelled in the RELAP5 code, is described. For this purpose a review of critical heat flux models/correlations applicable to non-uniform axial heat profiles is provided. Simulation results using the RELAP5 code and those obtained from our computer program, based on three types of prediction methods (local conditions, F-factor, and boiling length average approaches), were compared.

  6. The Interaction between Interoceptive and Action States within a Framework of Predictive Coding

    Science.gov (United States)

    Marshall, Amanda C.; Gentsch, Antje; Schütz-Bosbach, Simone

    2018-01-01

    The notion of predictive coding assumes that perception is an iterative process between prior knowledge and sensory feedback. To date, this perspective has been primarily applied to exteroceptive perception as well as action and its associated phenomenological experiences such as agency. More recently, this predictive, inferential framework has been theoretically extended to interoception. This idea postulates that subjective feeling states are generated by top–down inferences made about internal and external causes of interoceptive afferents. While the processing of motor signals for action control and the emergence of selfhood have been studied extensively, the contributions of interoceptive input and especially the potential interaction of motor and interoceptive signals remain largely unaddressed. Here, we argue for a specific functional relation between motor and interoceptive awareness. Specifically, we implicate interoceptive predictions in the generation of subjective motor-related feeling states. Furthermore, we propose a distinction between reflexive and pre-reflexive modes of agentic action control and suggest that interoceptive input may affect each differently. Finally, we advocate the necessity of continuous interoceptive input for conscious forms of agentic action control. We conclude by discussing further research contributions that would allow for a fuller understanding of the interaction between agency and interoceptive awareness. PMID:29515495

  7. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
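
    To make the modeling contrast concrete, the one-dimensional advective transport that both codes approximate can be illustrated with a minimal first-order upwind sketch. This is a generic numerical scheme, not the algebraic approximation used in STREAM nor the WASP5 solver, and the river velocity, grid, and release below are illustrative assumptions.

    ```python
    import numpy as np

    def upwind_advection(c0, velocity, dx, dt, steps):
        """First-order upwind solution of dc/dt + u*dc/dx = 0 on a 1D reach.
        Stable for a Courant number u*dt/dx <= 1."""
        c = np.array(c0, dtype=float)
        courant = velocity * dt / dx
        assert 0.0 <= courant <= 1.0, "reduce dt or refine dx"
        for _ in range(steps):
            c[1:] = c[1:] - courant * (c[1:] - c[:-1])
            c[0] = 0.0   # clean water entering at the upstream boundary
        return c

    # Illustrative release: 1 km grid, 0.5 m/s river velocity, short pollutant slug
    nx = 200
    c0 = np.zeros(nx)
    c0[:5] = 1.0                        # initial concentration pulse near the outfall
    profile = upwind_advection(c0, velocity=0.5, dx=1000.0, dt=600.0, steps=144)
    print(profile.argmax())             # index of the concentration peak after 24 h
    ```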

  8. Preliminary results of the seventh three-dimensional AER dynamic benchmark problem calculation. Solution with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Bencik, M.; Hadek, J.

    2011-01-01

    The paper gives a brief survey of the seventh three-dimensional AER dynamic benchmark calculation results received with the codes DYN3D and RELAP5-3D at Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. The transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created with the HELIOS code, was used for the generation of reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adapted for the purposes of the seventh AER dynamic benchmark. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected with the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer, lower and upper plenum was used. Mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characterization of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)

  9. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  10. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
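
    A minimal sketch of the basic (unsupervised) sparse coding step that the semi-supervised formulation builds on is shown below, alternating an ISTA update of the sparse codes with a least-squares codebook update. The label-propagation and linear-classifier terms of the paper's unified objective are not included, and the dictionary size, step size, and penalty weight are illustrative assumptions.

    ```python
    import numpy as np

    def sparse_coding(X, n_codewords=32, sparsity=0.1, n_iter=50, step=0.01, rng=None):
        """Alternate ISTA updates of sparse codes S with codebook updates D so that
        X (d x n) is approximated by D @ S with an l1 penalty on S."""
        rng = np.random.default_rng(rng)
        d, n = X.shape
        D = rng.standard_normal((d, n_codewords))
        D /= np.linalg.norm(D, axis=0, keepdims=True)
        S = np.zeros((n_codewords, n))
        for _ in range(n_iter):
            # ISTA step on the codes: gradient of 0.5*||X - D S||^2, then soft-threshold
            grad = D.T @ (D @ S - X)
            S = S - step * grad
            S = np.sign(S) * np.maximum(np.abs(S) - step * sparsity, 0.0)
            # Codebook update by least squares, then renormalize the codewords
            D = X @ np.linalg.pinv(S)
            D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
        return D, S

    X = np.random.default_rng(0).standard_normal((20, 100))
    D, S = sparse_coding(X)
    print(np.mean(S != 0.0))   # fraction of non-zero sparse coefficients
    ```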

  11. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. "agreement is within 10%". A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)

  12. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings obtained by the calculations were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with the experimental values within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) For critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  13. Code package to analyse behavior of the WWER fuel rods in normal operation: TOPRA's code

    International Nuclear Information System (INIS)

    Scheglov, A.; Proselkov, V.

    2001-01-01

    This paper briefly describes a code package intended for the analysis of WWER fuel rod characteristics. The package includes two computer codes, TOPRA-1 and TOPRA-2, for full-scale fuel rod analyses, and the MRZ and MKK codes for analyzing separate sections of fuel rods in r-z and r-φ geometry. The TOPRA codes are developed on the basis of the PIN-mod2 version and verified against experimental results obtained in the MR, MIR and Halden research reactors (in the framework of the SOFIT, FGR-2 and FUMEX experimental programs). A comparative analysis of calculation results against results from post-irradiation examination of WWER-440 and WWER-1000 fuel rods is also made as additional verification of these codes. To avoid enlarging the uncertainties in fuel behavior prediction as a result of simplifying the fuel geometry, the MKK and MRZ codes are developed on the basis of the finite element method with the use of three nodal finite elements. Results obtained in the course of the code verification indicate the possibility of applying the method and the TOPRA codes to simplified engineering calculations of the thermal-physical parameters of WWER fuel rods. An analysis of the maximum relative errors in predicting the fuel rod characteristics in the range of the accepted parameter values is also presented in the paper

  14. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    Full Text Available As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins, by making full use of the known lncRNA–protein interactions. Leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.
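
    A minimal sketch of the kind of two-phase resource propagation on a bipartite lncRNA–protein interaction graph that LPBNI-style methods use is given below. The exact weighting in LPBNI may differ, so this is a generic bipartite network inference step under that assumption, and the toy interaction matrix is hypothetical.

    ```python
    import numpy as np

    def bipartite_scores(A):
        """Generic two-phase resource spreading on a bipartite interaction matrix.
        A[i, j] = 1 if lncRNA i is known to interact with protein j.
        Returns a score matrix for ranking candidate (lncRNA, protein) pairs."""
        A = np.asarray(A, dtype=float)
        lnc_degree = A.sum(axis=1, keepdims=True)    # interactions per lncRNA
        prot_degree = A.sum(axis=0, keepdims=True)   # interactions per protein
        lnc_degree[lnc_degree == 0] = 1.0
        prot_degree[prot_degree == 0] = 1.0
        # Phase 1: each protein spreads its resource to interacting lncRNAs;
        # Phase 2: lncRNAs redistribute the resource back to proteins.
        W = (A / prot_degree).T @ (A / lnc_degree)   # protein-to-protein transfer matrix
        return A @ W                                 # new scores for unobserved pairs

    # Hypothetical toy network: 3 lncRNAs x 4 proteins
    A = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]])
    print(np.round(bipartite_scores(A), 2))
    ```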

  15. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance

  16. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation targeting to improve the side information of a single...

  17. Predicting tritium movement and inventory in fusion reactor subsystems using the TMAP code

    International Nuclear Information System (INIS)

    Jones, J.L.; Merrill, B.J.; Holland, D.F.

    1985-01-01

    The Fusion Safety Program of EG and G Idaho, Inc. at the Idaho National Engineering Laboratory (INEL) is developing a safety analysis code called TMAP (Tritium Migration Analysis Program) to analyze tritium loss from fusion systems during normal and off-normal conditions. TMAP is a one-dimensional code that calculates tritium movement and inventories in a system of interconnected enclosures and wall structures. These wall structures can include composite materials with bulk trapping of the permeating tritium on impurities or radiation-induced dislocations within the material. The thermal response of a structure can be modeled to provide the temperature information required for tritium movement calculations. Chemical reactions and hydrogen isotope movement can also be included in the calculations. TMAP was used to analyze the movement of tritium implanted into a proposed limiter/first wall structure design. This structure was composed of composite layers of vanadium and stainless steel. Included in these calculations was the effect of contrasting material tritium solubility at the composite interface. In addition, TMAP was used to investigate the rate of tritium cleanup after an accidental release into the atmosphere of a reactor building. Tritium retention and release from surfaces and conversion to the oxide form were predicted.

  18. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames, then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is then coded using the matching pursuit algorithm, which decomposes the signal over a purpose-designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
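
    For reference, a minimal matching pursuit over a generic redundant dictionary is sketched below; the anisotropic bidimensional dictionary and the fast atom-selection techniques of the paper are not reproduced, and the random unit-norm dictionary here is purely illustrative.

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=10):
        """Greedy matching pursuit: repeatedly pick the dictionary atom most
        correlated with the residual and subtract its contribution.
        dictionary: (signal_len, n_dictionary_atoms) with unit-norm columns."""
        residual = signal.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            correlations = dictionary.T @ residual
            k = int(np.argmax(np.abs(correlations)))    # best-matching atom
            coeffs[k] += correlations[k]
            residual -= correlations[k] * dictionary[:, k]
        return coeffs, residual

    # Illustrative setup: a random unit-norm dictionary standing in for the
    # anisotropic 2D dictionary of the paper (flattened prediction-error block)
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    x = rng.standard_normal(64)
    coeffs, res = matching_pursuit(x, D, n_atoms=20)
    print(np.linalg.norm(res) / np.linalg.norm(x))      # residual energy fraction
    ```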

  19. APR1400 Containment Simulation with CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Chung, Bub Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The more realistic containment pressure variation predicted by the CONTAIN code through the coupled analysis during a large break loss of coolant accident in the nuclear power plant is expected to provide a more accurate prediction of the plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR-1400 input for the CONTEMPT code. Similarly to the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment condition. The developed input for the CONTAIN code is to be eventually applied to the coupled code calculation of MARS-KS/CONTAIN

  20. APR1400 Containment Simulation with CONTAIN code

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Chung, Bub Dong

    2010-01-01

    The more realistic containment pressure variation predicted by the CONTAIN code through the coupled analysis during a large break loss of coolant accident in the nuclear power plant is expected to provide a more accurate prediction of the plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR-1400 input for the CONTEMPT code. Similarly to the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment condition. The developed input for the CONTAIN code is to be eventually applied to the coupled code calculation of MARS-KS/CONTAIN

  1. Physical models and codes for prediction of activity release from defective fuel rods under operation conditions and in leakage tests during refuelling

    International Nuclear Information System (INIS)

    Likhanskii, V.; Evdokimov, I.; Khoruzhii, O.; Sorokin, A.; Novikov, V.

    2003-01-01

    It is appropriate to use dependences based on physical models in design-analytical codes, both for improving the reliability of defective fuel rod detection and for determining defect characteristics from activity measurements in the primary coolant. In this paper, results on the development of some physical models and integral mechanistic codes intended for predicting the behaviour of defective fuel rods are presented. The analysis of mass transfer and mass exchange between the fuel rod and the coolant showed that the rates of these processes depend on many factors, such as the turbulent coolant flow, pressure, the effective hydraulic diameter of the defect, and the fuel rod geometric parameters. Models which describe these dependences have been created. The models of thermomechanical fuel behaviour and stable gaseous FP release were modified, and the new computer code RTOP-CA was created thereupon for describing defective fuel rod behaviour and activity release into the primary coolant. The model of fuel oxidation under in-pile conditions, which includes radiolysis, and the RTOP-LT code are planned, after validation of the physical models, to be used for prediction of defective fuel rod behaviour.

  2. Histone modification profiles are predictive for tissue/cell-type specific expression of both protein-coding and microRNA genes

    Directory of Open Access Journals (Sweden)

    Zhang Michael Q

    2011-05-01

    Full Text Available Abstract Background Gene expression is regulated at both the DNA sequence level and through modification of chromatin. However, the effect of chromatin on tissue/cell-type specific gene regulation (TCSR) is largely unknown. In this paper, we present a method to elucidate the relationship between histone modification/variation (HMV) and TCSR. Results A classifier for differentiating CD4+ T cell-specific genes from housekeeping genes using HMV data was built. We found HMV in both promoter and gene body regions to be predictive of genes which are targets of TCSR. For example, the histone modification types H3K4me3 and H3K27ac were identified as the most predictive for CpG-related promoters, whereas H3K4me3 and H3K79me3 were the most predictive for nonCpG-related promoters. However, genes targeted by TCSR can be predicted using other types of HMVs as well. Such redundancy implies that multiple types of underlying regulatory elements, such as enhancers or intragenic alternative promoters, which can regulate gene expression in a tissue/cell-type specific fashion, may be marked by the HMVs. Finally, we show that the predictive power of HMV for TCSR is not limited to protein-coding genes in CD4+ T cells, as we successfully predicted TCSR targeted genes in muscle cells, as well as microRNA genes with expression specific to CD4+ T cells, by the same classifier which was trained on HMV data of protein-coding genes in CD4+ T cells. Conclusion We have begun to understand the HMV patterns that guide gene expression in both a tissue/cell-type specific and a ubiquitous manner.

  3. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    Science.gov (United States)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with a degrading probability of predicting the source of the qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design as well as ease of modifying the number of users are equally exceptional qualities presented by the code, in contrast to the Optical Orthogonal Code (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  4. Code Development and Analysis Program: developmental checkout of the BEACON/MOD2A code

    International Nuclear Information System (INIS)

    Ramsthaler, J.A.; Lime, J.F.; Sahota, M.S.

    1978-12-01

    A best-estimate transient containment code, BEACON, is being developed by EG and G Idaho, Inc. for the Nuclear Regulatory Commission's reactor safety research program. This is an advanced, two-dimensional fluid flow code designed to predict temperatures and pressures in a dry PWR containment during a hypothetical loss-of-coolant accident. The most recent version of the code, MOD2A, is presently in the final stages of production prior to being released to the National Energy Software Center. As part of the final code checkout, seven sample problems were selected to be run with BEACON/MOD2A

  5. From structure prediction to genomic screens for novel non-coding RNAs.

    Science.gov (United States)

    Gorodkin, Jan; Hofacker, Ivo L

    2011-08-01

    Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.
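
    As a simplified, hedged illustration of single-sequence structure prediction (base-pair maximization rather than the energy-directed or comparative methods discussed in this review), the sketch below implements the classic Nussinov dynamic programming recursion, counting Watson-Crick and GU wobble pairs.

    ```python
    def nussinov_pairs(seq, min_loop=3):
        """Maximum-base-pairing RNA folding by the Nussinov recursion.
        Counts Watson-Crick and GU wobble pairs; min_loop enforces a minimal
        hairpin loop length. Returns the maximum number of base pairs."""
        pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = dp[i + 1][j]                       # base i left unpaired
                for k in range(i + min_loop + 1, j + 1):  # base i paired with some k
                    if (seq[i], seq[k]) in pairs:
                        left = dp[i + 1][k - 1]
                        right = dp[k + 1][j] if k + 1 <= j else 0
                        best = max(best, 1 + left + right)
                dp[i][j] = best
        return dp[0][n - 1]

    print(nussinov_pairs("GGGAAAUCC"))   # small hairpin-like example
    ```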

  6. Analysis Code - Data Analysis in 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' (LMSMIPNFA) v. 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2018-03-19

    R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.

  7. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as that of human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
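
    The abstract compares L-LDA against a lasso logistic regression baseline using AUC. The sketch below illustrates only that baseline and the AUC evaluation protocol on invented stand-in data; the Alexander Street corpus, the real session codes, and the L-LDA model itself are not reproduced here, and all feature and label names are hypothetical.

```python
# Hedged sketch of the lasso-logistic baseline and AUC comparison on synthetic data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical session-level data: one text string per session and a binary code.
sessions = ["feeling anxious about work", "we discussed medication side effects",
            "sleep has improved this week", "panic attacks during meetings"] * 25
codes = np.tile([1, 0, 0, 1], 25)            # 1 = symptom code present (invented)

X = CountVectorizer().fit_transform(sessions)
X_tr, X_te, y_tr, y_te = train_test_split(X, codes, test_size=0.3,
                                          random_state=0, stratify=codes)

# An L1 penalty gives the sparse "lasso" logistic regression used as the baseline.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X_tr, y_tr)
print("session-level AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```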

  8. RELAP5/MOD2 code assessment

    International Nuclear Information System (INIS)

    Nithianandan, C.K.; Shah, N.H.; Schomaker, R.J.; Miller, F.R.

    1985-01-01

    Babcock and Wilcox (B and W) has been working with the code developers at EG and G and the US Nuclear Regulatory Commission in assessing the RELAP5/MOD2 computer code for the past year by simulating selected separate-effects tests. The purpose of this assessment has been to evaluate the code for use in MIST (Ref. 2) and OTIS integral system tests simulations and in the prediction of pressurized water reactor transients. B and W evaluated various versions of the code and made recommendations to improve code performance. As a result, the currently released version (cycle 36.1) has been improved considerably over earlier versions. However, further refinements to some of the constitutive models may still be needed to further improve the predictive capability of RELAP5/MOD2. The following versions of the code were evaluated. (1) RELAP/MOD2/Cycle 22 - first released version; (2) YELAP5/Cycle 32 - EG and G test version of RELAP5/MOD2/Cycle 32; (3) RELAP5/MOD2/Cycle 36 - frozen cycle for international code assessment; (4) updates to cycle 36 based on recommendations developed by B and W during the simulation of a Massachusetts Institute of Technology (MIT) pressurizer test; and (5) cycle 36.1 updates received from EG and G

  9. RELAP5/MOD2 code assessment

    Energy Technology Data Exchange (ETDEWEB)

    Nithianandan, C.K.; Shah, N.H.; Schomaker, R.J.; Miller, F.R.

    1985-11-01

    Babcock and Wilcox (B and W) has been working with the code developers at EG and G and the US Nuclear Regulatory Commission in assessing the RELAP5/MOD2 computer code for the past year by simulating selected separate-effects tests. The purpose of this assessment has been to evaluate the code for use in MIST (Ref. 2) and OTIS integral system tests simulations and in the prediction of pressurized water reactor transients. B and W evaluated various versions of the code and made recommendations to improve code performance. As a result, the currently released version (cycle 36.1) has been improved considerably over earlier versions. However, further refinements to some of the constitutive models may still be needed to further improve the predictive capability of RELAP5/MOD2. The following versions of the code were evaluated. (1) RELAP/MOD2/Cycle 22 - first released version; (2) YELAP5/Cycle 32 - EG and G test version of RELAP5/MOD2/Cycle 32; (3) RELAP5/MOD2/Cycle 36 - frozen cycle for international code assessment; (4) updates to cycle 36 based on recommendations developed by B and W during the simulation of a Massachusetts Institute of Technology (MIT) pressurizer test; and (5) cycle 36.1 updates received from EG and G.

  10. High-fidelity plasma codes for burn physics

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)

    2016-10-19

    Accurate predictions of the equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both the experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they may have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  11. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  12. Use of AERIN code for determining internal doses of transuranic isotopes

    International Nuclear Information System (INIS)

    King, W.C.

    1980-01-01

    The AERIN computer code is a mathematical expression of the ICRP Lung Model. The code was developed at the Lawrence Livermore National Laboratory to compute the body organ burdens and absorbed radiation doses resulting from the inhalation of transuranic isotopes and to predict the amount of activity excreted in the urine and feces as a function of time. Over forty cases of internal exposure have been studied using the AERIN code. The code, as modified, has proven to be extremely versatile. The case studies presented demonstrate the excellent correlation that can be obtained between code predictions and observed bioassay data. In one case study a discrepancy was observed between an in vivo count of the whole body and the application of the code using urine and fecal data as input. The discrepancy was resolved by in vivo skull counts that showed the code had predicted the correct skeletal burden
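
    As a rough illustration of the kind of first-order compartment kinetics that underlie ICRP lung-model codes of this type, the sketch below solves a tiny linear transfer system for organ burdens and cumulative excretion after a unit acute intake. The compartments and rate constants are invented for the example; this is not the AERIN model or the full ICRP lung model.

```python
# Hedged sketch of first-order compartment kinetics: dq/dt = A q after a unit intake.
import numpy as np
from scipy.linalg import expm

# Compartments: 0 = lung, 1 = blood, 2 = cumulative excretion (sink). Rates are
# hypothetical, in units of per day.
k_lung_blood, k_blood_excreta = 0.02, 0.05
A = np.array([[-k_lung_blood,            0.0, 0.0],
              [ k_lung_blood, -k_blood_excreta, 0.0],
              [          0.0,  k_blood_excreta, 0.0]])

q0 = np.array([1.0, 0.0, 0.0])                  # unit intake deposited in the lung
for day in (1, 30, 365):
    q = expm(A * day) @ q0                      # matrix exponential gives q(t)
    print(f"day {day:4d}: lung={q[0]:.3f}  blood={q[1]:.3f}  excreted={q[2]:.3f}")
```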

  13. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    Science.gov (United States)

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. Coding validity of conditions is closely related to their clinical importance and the complexity of the patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
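
    For reference, the two validity measures named in the abstract reduce to simple counts over a confusion matrix between coded data and chart review. The sketch below computes them on hypothetical numbers; the record counts are invented, not the Alberta data.

```python
# Minimal sketch: sensitivity and positive predictive value of an administrative
# code against chart review, on invented example counts.
def sensitivity_ppv(coded, on_chart):
    """coded / on_chart: parallel lists of 0/1 flags (code present vs. condition on chart)."""
    tp = sum(c and r for c, r in zip(coded, on_chart))
    fn = sum((not c) and r for c, r in zip(coded, on_chart))
    fp = sum(c and (not r) for c, r in zip(coded, on_chart))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sens, ppv

# Hypothetical example: a condition coded in 70 of 100 chart-positive records,
# plus 5 false-positive codes.
coded    = [1] * 70 + [0] * 30 + [1] * 5
on_chart = [1] * 100 + [0] * 5
print(sensitivity_ppv(coded, on_chart))   # ~ (0.70, 0.93)
```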

  14. Accuracy improvement of SPACE code using the optimization for CHF subroutine

    International Nuclear Information System (INIS)

    Yang, Chang Keun; Kim, Yo Han; Park, Jong Eun; Ha, Sang Jun

    2010-01-01

    Typically, a subroutine to calculate the critical heat flux (CHF) is included in codes for the safety analysis of nuclear power plants. The CHF subroutine evaluates the CHF phenomenon for arbitrary conditions (temperature, pressure, flow rate, power, etc.). When a safety analysis of a nuclear power plant is performed, the CHF parameter is one of the most important factors. However, the subroutines used in most codes, such as the Biasi method, estimate values that differ somewhat from experimental data, and most CHF subroutines can predict reliably only within their specified ranges of pressure, mass flow, void fraction, etc. Even when the most accurate CHF subroutine is used in a high-quality nuclear safety analysis code, it is not assured that the values predicted by the subroutine are acceptable outside its application range. To overcome this difficulty, various approaches to estimating the CHF were examined during the development of the SPACE code, and the six sigma technique was adopted for the examination, as described in this study. The objective of this study is the improvement of CHF prediction accuracy for a nuclear power plant safety analysis code using a CHF database and the six sigma technique. Through the study, it was concluded that the six sigma technique is useful for quantifying the deviation of predicted values from experimental data, and that the CHF prediction method implemented in the SPACE code has good predictive capability compared with other methods.
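
    One common way to quantify such deviations is through predicted-to-measured (P/M) ratio statistics. The sketch below shows that bookkeeping in the spirit of the six-sigma screening described above; the fluence values and acceptance limits are hypothetical, not the SPACE CHF database or its acceptance criteria.

```python
# Hedged sketch: mean and spread of predicted-to-measured (P/M) CHF ratios, plus a
# process-capability style sigma level against illustrative +/-20% limits.
import numpy as np

predicted = np.array([2.10, 1.95, 2.40, 1.80, 2.25])   # MW/m^2, hypothetical
measured  = np.array([2.00, 2.00, 2.30, 1.90, 2.20])   # MW/m^2, hypothetical

pm = predicted / measured
print(f"mean P/M = {pm.mean():.3f}, std = {pm.std(ddof=1):.3f}")

# How many standard deviations separate the mean P/M from the acceptance limits
# (taken here, purely for illustration, as 0.8 and 1.2)?
sigma_level = min(1.2 - pm.mean(), pm.mean() - 0.8) / pm.std(ddof=1)
print(f"sigma level against +/-20% limits: {sigma_level:.1f}")
```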

  15. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants.

    Science.gov (United States)

    Vieira, Lucas Maciel; Grativol, Clicia; Thiebaut, Flavia; Carvalho, Thais G; Hardoim, Pablo R; Hemerly, Adriana; Lifschitz, Sergio; Ferreira, Paulo Cavalcanti Gomes; Walter, Maria Emilia M T

    2017-03-04

    Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them, there is a large amount of a particular class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge and, currently, the few computational methods available are so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in the recent literature. As far as we know, there are no methods or workflows specifically designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies that allowed us to identify novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially-expressed lincRNAs in sugarcane and maize plants submitted to pathogenic and beneficial microorganisms.
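
    The final step of such a workflow is an SVM classifier over sequence-derived features. The sketch below shows that step only; the features (length, GC content, longest-ORF fraction), the toy sequences and the labels are illustrative assumptions, not the published PlantRNA_Sniffer feature set or training data.

```python
# Hedged sketch: an SVM over simple, assumed sequence features for coding vs.
# lincRNA-candidate classification.
import re
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def features(seq):
    """Length, GC content and longest-ORF fraction of a nucleotide sequence."""
    seq = seq.upper()
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    orfs = re.findall(r"ATG(?:[ACGT]{3})*?(?:TAA|TAG|TGA)", seq)
    longest_orf = max((len(o) for o in orfs), default=0) / len(seq)
    return [len(seq), gc, longest_orf]

# Hypothetical training set: label 1 = protein-coding, 0 = lincRNA candidate.
train_seqs = ["ATGGCTGCTGCTTAA" * 20, "ATATATATGCATATAT" * 20,
              "ATGGGCGGCGGCTGA" * 25, "TTATTGCATTACGTAT" * 25]
labels = [1, 0, 1, 0]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit([features(s) for s in train_seqs], labels)
print(model.predict([features("ATGCATGCATTACGAT" * 30)]))
```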

  16. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants

    Directory of Open Access Journals (Sweden)

    Lucas Maciel Vieira

    2017-03-01

    Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them, there is a large amount of a particular class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge and, currently, the few computational methods available are so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in the recent literature. As far as we know, there are no methods or workflows specifically designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies that allowed us to identify novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially-expressed lincRNAs in sugarcane and maize plants submitted to pathogenic and beneficial microorganisms.

  17. From structure prediction to genomic screens for novel non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Jan Gorodkin

    2011-08-01

    Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.

  18. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and the formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases

  19. A study on the application of CRUDTRAN code in primary systems of domestic pressurized heavy-water reactors for prediction of radiation source term

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Cho, Hoon Jo; Jung, Min Young; Lee, Sang Heon [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2017-04-15

    The importance of developing a source-term assessment technology has been emphasized owing to the decommissioning of Kori nuclear power plant (NPP) Unit 1 and the increasing number of deteriorated NPPs. We analyzed the behavioral mechanism of corrosion products in the primary system of a pressurized heavy-water reactor-type NPP. In addition, to check the possibility of applying the CRUDTRAN code to a Canadian Deuterium Uranium Reactor (CANDU)-type NPP, its applicability was assessed using collected domestic onsite data. With the assessment results, it was possible to predict trends according to operating cycles. Values estimated using the code were similar to the measured values. The results of this study are expected to be used to manage the radiation exposure of operators in high-radiation areas and to predict decommissioning processes in the primary system.

  20. Fire-accident analysis code (FIRAC) verification

    International Nuclear Information System (INIS)

    Nichols, B.D.; Gregory, W.S.; Fenton, D.L.; Smith, P.R.

    1986-01-01

    The FIRAC computer code predicts fire-induced transients in nuclear fuel cycle facility ventilation systems. FIRAC calculates simultaneously the gas-dynamic, material transport, and heat transport transients that occur in any arbitrarily connected network system subjected to a fire. The network system may include ventilation components such as filters, dampers, ducts, and blowers. These components are connected to rooms and corridors to complete the network for moving air through the facility. An experimental ventilation system has been constructed to verify FIRAC and other accident analysis codes. The design emphasizes network system characteristics and includes multiple chambers, ducts, blowers, dampers, and filters. A larger industrial heater and a commercial dust feeder are used to inject thermal energy and aerosol mass. The facility is instrumented to measure volumetric flow rate, temperature, pressure, and aerosol concentration throughout the system. Aerosol release rates and mass accumulation on filters also are measured. We have performed a series of experiments in which a known rate of thermal energy is injected into the system. We then simulated this experiment with the FIRAC code. This paper compares and discusses the gas-dynamic and heat transport data obtained from the ventilation system experiments with those predicted by the FIRAC code. The numerically predicted data generally are within 10% of the experimental data

  1. Predictions of the thermomechanical code ''RESTA'' compared with fuel element examinations after irradiation in the BR3 reactor

    International Nuclear Information System (INIS)

    Petitgrand, S.

    1980-01-01

    A large number of fuel rods have been irradiated in the small power plant BR3. Many of them have been examined in hot cells after irradiation, thus giving valuable experimental information. On the other hand, a thermomechanical code named RESTA has been developed by the C.E.A. to describe and predict the behaviour of a fuel pin in a PWR environment under stationary conditions. The models used in that code derive chiefly from the C.E.A.'s own experience and are briefly reviewed in this paper. The comparison between prediction and experiment has been performed for four power history classes: (1) moderate (average linear rating approximately 20 kW m⁻¹) and short (approximately 300 days) rating, (2) moderate (approximately 20 kW m⁻¹) and long (approximately 600 days) rating, (3) high (25-30 kW m⁻¹) and long (approximately 600 days) rating and (4) very high (30-40 kW m⁻¹) and long (approximately 600 days) rating. Satisfactory agreement has been found between experimental and calculated results in all cases, concerning fuel structural change, fission gas release, pellet-clad interaction as well as clad permanent strain. (author)

  2. The Linear Predictive Coding (LPC) Method for Hidden Markov Model (HMM) Classification of Arabic Words Spoken by Indonesian Speakers

    Directory of Open Access Journals (Sweden)

    Ririn Kusumawati

    2016-05-01

    In the classification stage, using a Hidden Markov Model, the voice signal is analyzed and the model with the maximum likelihood is searched for so that the word can be recognized. The parameters obtained from this modeling are compared with the speech of Arabic speakers. From the classification test results, Hidden Markov Models with Linear Predictive Coding feature extraction achieved an average accuracy of 78.6% for test data sampled at 8,000 Hz, 80.2% for test data sampled at 22,050 Hz, and 79% for test data sampled at 44,100 Hz.
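
    The LPC front-end referred to above fits an all-pole model to each short speech frame. The sketch below shows the standard autocorrelation method solved with the Levinson-Durbin recursion in NumPy; it is a generic textbook LPC, not the authors' exact feature pipeline, and the frame parameters are illustrative.

```python
# Hedged sketch: LPC coefficients for one windowed frame via Levinson-Durbin.
import numpy as np

def lpc(frame, order):
    """Return `order` LPC coefficients for one windowed speech frame."""
    # Autocorrelation lags r[0..order]
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                              # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]          # symmetric update a_j += k * a_{i-j}
        err *= (1.0 - k * k)                        # remaining prediction error power
    return a[1:]

# Example: a 25 ms frame of a synthetic 200 Hz tone sampled at 8 kHz, 10th order.
fs, f0 = 8000, 200
t = np.arange(int(0.025 * fs)) / fs
frame = np.sin(2 * np.pi * f0 * t) * np.hamming(t.size)
print(lpc(frame, order=10))
```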

  3. Comparing Fine-Grained Source Code Changes And Code Churn For Bug Prediction

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    A significant amount of research effort has been dedicated to learning prediction models that allow project managers to efficiently allocate resources to those parts of a software system that most likely are bug-prone and therefore critical. Prominent measures for building bug prediction models are change measures such as code churn and fine-grained source code changes.

  4. Helios: History and Anatomy of a Successful In-House Enterprise High-Throughput Screening and Profiling Data Analysis System.

    Science.gov (United States)

    Gubler, Hanspeter; Clare, Nicholas; Galafassi, Laurent; Geissler, Uwe; Girod, Michel; Herr, Guy

    2018-06-01

    We describe the main characteristics of the Novartis Helios data analysis software system (Novartis, Basel, Switzerland) for plate-based screening and profiling assays, which was designed and built about 11 years ago. It has been in productive use for more than 10 years and is one of the important standard software applications running for a large user community at all Novartis Institutes for BioMedical Research sites globally. A high degree of automation is reached by embedding the data analysis capabilities into a software ecosystem that deals with the management of samples, plates, and result data files, including automated data loading. The application provides a series of analytical procedures, ranging from very simple to advanced, which can easily be assembled by users in very flexible ways. This also includes the automatic derivation of a large set of quality control (QC) characteristics at every step. Any of the raw, intermediate, and final results and QC-relevant quantities can be easily explored through linked visualizations. Links to global assay metadata management, data warehouses, and an electronic lab notebook system are in place. Automated transfer of relevant data to data warehouses and electronic lab notebook systems are also implemented.

  5. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    Science.gov (United States)

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of the PRO model.

  6. Predictive Coding and Multisensory Integration: An Attentional Account of the Multisensory Mind

    Directory of Open Access Journals (Sweden)

    Durk eTalsma

    2015-03-01

    Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation is dependent on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to just sensory inputs, but that internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a central top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation.

  7. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions

  8. Predictive coding of dynamical variables in balanced spiking networks.

    Science.gov (United States)

    Boerlin, Martin; Machens, Christian K; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
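
    The core idea summarized above is that a neuron's membrane potential can be read as a prediction error between a target signal and the population's linear readout, and that a spike is emitted only when it reduces that error. The sketch below is a deliberately minimal 1-D version of that idea (with a one-spike-per-step rule standing in for fast recurrent inhibition); it is an illustration under simplifying assumptions, not the authors' full derivation for arbitrary linear dynamical systems.

```python
# Hedged sketch: a tiny spiking "predictive autoencoder" tracking a 1 Hz signal.
import numpy as np

rng = np.random.default_rng(0)
dt, T, N = 1e-3, 2.0, 20                     # time step [s], duration [s], neurons
lam = 10.0                                   # readout decay rate [1/s]
w = rng.uniform(0.05, 0.15, N) * rng.choice([-1.0, 1.0], N)   # decoding weights
thresh = w ** 2 / 2                          # thresholds from the greedy-error rule

steps = int(T / dt)
x = np.sin(2 * np.pi * np.arange(steps) * dt)    # target signal
r = np.zeros(N)                              # filtered spike trains
x_hat = 0.0
err = np.zeros(steps)

for t in range(steps):
    V = w * (x[t] - x_hat)                   # membrane potential = prediction error
    i = int(np.argmax(V - thresh))           # candidate spiker
    if V[i] > thresh[i]:                     # fire only if the spike improves the readout
        r[i] += 1.0                          # one spike per step approximates fast inhibition
    r *= np.exp(-lam * dt)                   # leak of the filtered spike trains
    x_hat = float(w @ r)                     # linear readout
    err[t] = x[t] - x_hat

print("tracking RMS error:", float(np.sqrt(np.mean(err ** 2))))
```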

  9. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multichannel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.P.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.

    1994-01-01

    The NUFREQ-NP (G.C. Park et al. NUREG/CR-3375, 1983; S.J. Peng et al. NUREG/CR-4116, 1984; S.J. Peng et al. Nucl. Sci. Eng. 88 (1988) 404-411) code was modified and set up at Westinghouse, USA, for mixed fuel type multichannel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF (R. Taleyarkhan et al. J. Heat Transfer 107 (February 1985) 175-181; NUREG/CR2972, 1983), for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All 13 Peach Bottom-2 EOC-2/3 low flow stability tests (L.A. Carmichael and R.O. Neimi, EPRI NP-564, Project 1020-1, 1978; F.B. Woffinden and R.O. Neimi, EPRI, NP 0972, Project 1020-2, 1981) were simulated. A key aspect for code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. ((orig.))

  10. Behaviors of impurity in ITER and DEMOs using BALDUR integrated predictive modeling code

    International Nuclear Information System (INIS)

    Onjun, Thawatchai; Buangam, Wannapa; Wisitsorasak, Apiwat

    2015-01-01

    The behaviors of impurity are investigated using self-consistent modeling of 1.5D BALDUR integrated predictive modeling code, in which theory-based models are used for both core and edge region. In these simulations, a combination of NCLASS neoclassical transport and Multi-mode (MMM95) anomalous transport model is used to compute a core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model. This pedestal temperature model is based on a combination of magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The time evolution of plasma current, temperature and density profiles is carried out for ITER and DEMOs plasmas. As a result, the impurity behaviors such as impurity accumulation and impurity transport can be investigated. (author)

  11. Two-dimensional steady-state thermal and hydraulic analysis code for prediction of detailed temperature fields around distorted fuel pin in LMFBR assembly: SPOTBOW

    International Nuclear Information System (INIS)

    Shimizu, T.

    1983-01-01

    The SPOTBOW computer program has been developed for predicting detailed temperature and turbulent flow velocity fields around distorted fuel pins in LMFBR fuel assemblies, in which pin-to-pin and pin-to-wrapper-tube contacts may occur. The present study started from the requirement of reactor core designers to evaluate the local hot spot temperature due to the wire contact effect and the effect of pin bowing on the cladding temperature distribution. The code handles both unbaffled and wire-wrapped pin bundles. The Galerkin method and an iterative procedure were used to solve the basic equations which govern the local heat and momentum transfer in turbulent fluid flow around the distorted pins. Comparisons have been made with cladding temperatures measured in normal and distorted pin bundle mockups to check the validity of this code. Predicted peak temperatures in the vicinity of the wire contact point were somewhat higher than the measured values, and the shape of the peaks agreed well with measurement. The changes of cladding temperature due to the decrease of gap width between a bowing pin and an adjacent pin were predicted well

  12. Improved lossless intra coding for H.264/MPEG-4 AVC.

    Science.gov (United States)

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
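
    The key contrast in the abstract is between block-based prediction and sample-by-sample DPCM. The sketch below illustrates that contrast on a single 1-D row of pixel values; it is an illustration only, not the H.264/AVC prediction modes or entropy coding, and the pixel values are invented.

```python
# Hedged sketch: block-based prediction vs. sample-wise DPCM on a 1-D pixel row.
import numpy as np

row = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=np.int16)

# Block-based stand-in: every sample predicted from a single reference sample
# (here the first sample of the row), so residuals grow across the block.
block_residual = row - row[0]

# Sample-wise DPCM: each sample predicted from its immediately preceding sample,
# so residuals stay small and are cheaper to entropy-code.
dpcm_residual = np.empty_like(row)
dpcm_residual[0] = row[0]                  # first sample coded as-is
dpcm_residual[1:] = row[1:] - row[:-1]     # left-neighbour prediction

print("block residual:", block_residual)
print("DPCM residual :", dpcm_residual)

# Lossless reconstruction from the DPCM residuals:
assert np.array_equal(np.cumsum(dpcm_residual), row)
```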

  13. GAPCON-THERMAL-3 code description

    International Nuclear Information System (INIS)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

    GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user specified power history. The method of weighted residuals is used for the steady state temperature calculation, and is combined with a finite difference approximation of the time derivative for transient conditions. The stress strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual inpile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes

  14. GAPCON-THERMAL-3 code description

    Energy Technology Data Exchange (ETDEWEB)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

    GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user specified power history. The method of weighted residuals is used for the steady state temperature calculation, and is combined with a finite difference approximation of the time derivative for transient conditions. The stress strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual inpile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes.

  15. Deformable mirror study. Final report, 21 July 1980-15 May 1981

    International Nuclear Information System (INIS)

    Budgor, A.B.

    1981-03-01

    The beam quality of a baseline system similar to the Helios system at Los Alamos Scientific Laboratory was analyzed with a two-dimensional beam train code based on a Fresnel propagator. The other components of the code include: (a) characterization of phase aberrations either in terms of Zernike polynomials synthesized directly from optical component interferograms when available, or by constructing a random wavefront with specified statistics; (b) non-diffractive linear amplification via the Frantz-Nodvik equations; and (c) correction of accumulated phase aberration with continuous deformable mirrors whose surface is modeled by bicubic splines through the actuator points. The technical contents of this report will be presented in 4 sections. Section II will describe the physical optics of beam train propagation. A heuristic physical argument defining the zeroth order efficacy of adaptive optics to correct phase aberration is then derived. The results of applying the diffraction computer code to one beam line of the Helios laser system are described. The wavelength scalability of deformable mirrors and the efficacy of deformable mirror adaptive optics to correct phase aberration at UV wavelengths are then described
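
    For the amplification step mentioned in item (b), the standard Frantz-Nodvik relation gives the output fluence of a saturable amplifier from its input fluence, small-signal gain and saturation fluence. The sketch below evaluates that textbook relation with illustrative numbers; the parameter values are assumptions, not Helios data.

```python
# Hedged sketch: the Frantz-Nodvik fluence relation for a saturable amplifier,
# F_out = F_sat * ln(1 + G0 * (exp(F_in / F_sat) - 1)), with illustrative numbers.
import numpy as np

def frantz_nodvik(F_in, g0_l, F_sat):
    """Output fluence for input fluence F_in, small-signal gain exp(g0_l),
    and saturation fluence F_sat (same units as F_in)."""
    G0 = np.exp(g0_l)
    return F_sat * np.log(1.0 + G0 * np.expm1(F_in / F_sat))

F_in = np.array([0.01, 0.1, 0.5, 1.0])       # J/cm^2, hypothetical inputs
print(frantz_nodvik(F_in, g0_l=2.0, F_sat=0.35))
# The fluence gain rolls off from ~exp(2) at low fluence toward ~1 as the gain
# medium saturates.
```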

  16. Methods and Model Development for Coupled RELAP5/PARCS Analysis of the Atucha-II Nuclear Power Plant

    International Nuclear Information System (INIS)

    Ward, A.M.; Collins, B.S.; Xu, Y.; Downar, Th.J.; Madariaga, M.

    2011-01-01

    In order to analyze the steady state and transient behavior of CNA-II, several tasks were required. Methods and models were developed in several areas. HELIOS lattice models were developed and benchmarked against WIMS/MCNP5 results generated by NA-SA. Cross-sections for the coupled RELAP5/PARCS calculation were extracted from HELIOS within the Gen PMAXS framework. The validation of both HELIOS and PARCS was performed primarily by comparisons to WIMS/PUMA and MCNP for idealized models. Special methods were developed to model the control rods and boron injection systems of CNA-II. The insertion of the rods is oblique, and a special routine was added to PARCS to treat this effect. CFD results combined with specialized mapping routines were used to model the boron injection system. In all cases there was good agreement in the results which provided confidence in the neutronics methods and modeling. A coupled code benchmark between U of M and U of Pisa is ongoing and results are still preliminary. Under a LOCA transient, the best estimate behavior of the core appears to be acceptable

  17. Fuel behavior modeling using the MARS computer code

    International Nuclear Information System (INIS)

    Faya, S.C.S.; Faya, A.J.G.

    1983-01-01

    The fuel behaviour modeling code MARS was evaluated against experimental data. Two cases were selected: an early commercial PWR rod (Maine Yankee rod) and an experimental rod from the Canadian BWR program (Canadian rod). The MARS predictions are compared with experimental data and with predictions made by other fuel modeling codes. Improvements are suggested for some fuel behaviour models. MARS results are satisfactory based on the data available. (Author)

  18. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    International Nuclear Information System (INIS)

    Paolucci, Roberto; Stupazzini, Marco

    2008-01-01

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, which is used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, which is hard to predict and to put into a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion

  19. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12%-77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
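
    A common way to frame such Bayesian decoding is to treat each OSN's spike count as Poisson-distributed with an odor-specific mean rate and to pick the maximum-a-posteriori odor. The sketch below shows that generic scheme on invented rates; it is not the authors' fitted model or their recorded data, and the odor names are placeholders.

```python
# Hedged sketch: maximum-a-posteriori odor decoding from OSN spike counts under a
# Poisson response assumption, using invented mean rates.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
n_osn = 19
odors = ["odor A", "odor B", "odor C"]
rates = rng.uniform(1, 30, size=(len(odors), n_osn))    # mean counts per trial (invented)

def decode(counts, rates, prior=None):
    """Return the index of the maximum-a-posteriori odor for an OSN count vector."""
    prior = np.full(len(rates), 1.0 / len(rates)) if prior is None else prior
    log_post = np.log(prior) + poisson.logpmf(counts, rates).sum(axis=1)
    return int(np.argmax(log_post))

# Simulate a test trial of odor B and decode it.
trial = rng.poisson(rates[1])
print("decoded:", odors[decode(trial, rates)], "| true:", odors[1])
```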

  20. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    International Nuclear Information System (INIS)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions

  1. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  2. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    International Nuclear Information System (INIS)

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT Code, developed for analysis of long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data for the steam generator design with the cover-gas feature and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code prediction and the test data

  3. Development of safety analysis codes for light water reactor

    International Nuclear Information System (INIS)

    Akimoto, Masayuki

    1985-01-01

    An overview is presented of the major codes currently used for the prediction of thermohydraulic transients in nuclear power plants. The overview centers on the two-phase fluid dynamics of the coolant system and the assessment of the codes. Some two-phase phenomena, such as phase separation, are still not predicted with engineering accuracy. The MINCS-PIPE code is briefly introduced; it is intended to assess constitutive relations and to aid the development of experimental correlations for models ranging from 1V1T to 2V2T. (author)

  4. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc

    Energy Technology Data Exchange (ETDEWEB)

    Banner, D

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 data base has been used for this study. In a first step, three thermal-hydraulic mixing models were implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented in order of growing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model the generation and decay of turbulence in the DNB test section. Model C is obtained by representing oriented transverse flows due to mixing vanes in addition to the k-L equation. A parametric study carried out with the three mixing models exhibits the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt has been made to substitute another statistical approach (a pseudo-cubic thin-plate spline method) for the correlations. It is then shown that standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs.

  5. Technical and tactical training of the «Helios» Kharkiv team in the first round of the 23rd Ukrainian football championship in the premier league, 2013–2014

    Directory of Open Access Journals (Sweden)

    Rebaz Sleman

    2014-10-01

    Purpose: to define the characteristics of a model of technical and tactical training for a team participating in the Ukrainian First League. Material and Methods: the research was conducted using the method of expert evaluation; five football specialists served as experts. Results: mean values were obtained for the analyzed variables over 10 games, together with the various technical and tactical actions and their percentage share in the overall structure of the team's game statistics for 20 games, as well as some indicators of the play of "Helios" Kharkiv. Conclusions: quantitative and qualitative indicators were obtained (the error rate of the team's technical and tactical actions as a whole, as well as separately for each technical and tactical element). These indicators make it possible to adjust the training process of this team in order to improve sporting mastery.

  6. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  7. Assessment of void fraction prediction using the RETRAN-3d and CORETRAN-01/VIPRE-02 codes

    International Nuclear Information System (INIS)

    Aounallah, Y.; Coddington, P.; Gantner, U.

    2000-01-01

    A review of wide-range void fraction correlations against an extensive database has been undertaken to identify the correlations best suited for nuclear safety applications. Only those based on the drift-flux model have been considered. The survey confirmed the application range of the Chexal-Lellouche correlation, and the database was also used to obtain new parameters for the Inoue drift-flux correlation, which was also found suitable. A void fraction validation study has also been undertaken for the codes RETRAN-3D and CORETRAN-01/VIPRE-02 at the assembly and sub-assembly levels. The study showed the impact of the RETRAN-03 user options on the predicted void fraction, and the RETRAN-3D limitation at very low fluid velocity. At the sub-assembly level, CORETRAN-01/VIPRE-02 substantially underestimates the void in regions with low power-to-flow ratios. Otherwise, a generally good predictive performance was obtained with both RETRAN-3D and CORETRAN-01/VIPRE-02. (authors)
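
    As a purely illustrative aid for readers unfamiliar with the drift-flux form used by such correlations, the sketch below evaluates the generic relation α = j_g / (C0·j + Vgj). It is not the Chexal-Lellouche or Inoue correlation; the distribution parameter C0 and drift velocity Vgj are placeholder constants, whereas a real correlation computes them from pressure and flow conditions.

```python
# Generic drift-flux void fraction: alpha = j_g / (C0 * j + Vgj)
# C0 (distribution parameter) and Vgj (drift velocity, m/s) are placeholders;
# correlations such as Chexal-Lellouche compute them from flow conditions.

def drift_flux_void_fraction(j_g, j_f, C0=1.13, Vgj=0.2):
    """Return void fraction from superficial gas/liquid velocities (m/s)."""
    j = j_g + j_f                      # total volumetric flux
    return j_g / (C0 * j + Vgj)

if __name__ == "__main__":
    # Example: low-pressure, air-water-like conditions (illustrative only)
    print(f"alpha = {drift_flux_void_fraction(j_g=0.5, j_f=1.0):.3f}")
```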

  8. Assessment of void fraction prediction using the RETRAN-3d and CORETRAN-01/VIPRE-02 codes

    Energy Technology Data Exchange (ETDEWEB)

    Aounallah, Y.; Coddington, P.; Gantner, U

    2000-07-01

    A review of wide-range void fraction correlations against an extensive database has been undertaken to identify the correlations best suited for nuclear safety applications. Only those based on the drift-flux model have been considered. The survey confirmed the application range of the Chexal-Lellouche correlation, and the database was also used to obtain new parameters for the Inoue drift-flux correlation, which was also found suitable. A void fraction validation study has also been undertaken for the codes RETRAN-3D and CORETRAN-01/VIPRE-02 at the assembly and sub-assembly levels. The study showed the impact of the RETRAN-03 user options on the predicted void fraction, and the RETRAN-3D limitation at very low fluid velocity. At the sub-assembly level, CORETRAN-01/VIPRE-02 substantially underestimates the void in regions with low power-to-flow ratios. Otherwise, a generally good predictive performance was obtained with both RETRAN-3D and CORETRAN-01/VIPRE-02. (authors)

  9. Verification of reactor safety codes

    International Nuclear Information System (INIS)

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of a wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full-scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  10. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners, the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.
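
    As background for readers unfamiliar with electromagnetic particle codes, the sketch below shows a standard (non-relativistic) Boris velocity update, the kind of particle push commonly used in codes of this family. It is a generic illustration, not an excerpt from the TRISTAN source; the field values, charge, mass and time step in the example are arbitrary.

```python
import numpy as np

def boris_push(v, E, B, q, m, dt):
    """One non-relativistic Boris velocity update (generic PIC step)."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half electric acceleration
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + qmdt2 * E                # second half electric acceleration

# Illustrative use with arbitrary (non-physical) field values
v_new = boris_push(np.array([1.0, 0.0, 0.0]),
                   E=np.array([0.0, 0.0, 0.1]),
                   B=np.array([0.0, 0.0, 1.0]),
                   q=-1.0, m=1.0, dt=0.05)
print(v_new)
```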

  11. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions
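
    A minimal example of the fluid-solid equilibrium test underlying such comparisons is the mineral saturation index, SI = log10(IAP) - log10(K): a fluid is close to equilibrium with a mineral when SI is near zero, undersaturated when SI is negative, and supersaturated when SI is positive. The sketch below is generic and is not the EQ3/6 implementation; the ion activity product and equilibrium constant are illustrative numbers only.

```python
import math

def saturation_index(ion_activity_product, log10_K):
    """SI = log10(IAP) - log10(K); values near 0 indicate equilibrium."""
    return math.log10(ion_activity_product) - log10_K

# Hypothetical example: IAP and log K values are illustrative only
si = saturation_index(ion_activity_product=3.2e-9, log10_K=-8.48)
state = "supersaturated" if si > 0 else "undersaturated" if si < 0 else "equilibrium"
print(f"SI = {si:.2f} ({state})")
```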

  12. Assessment of one dimensional reflood model in REFLA/TRAC code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-12-01

    Post-test calculations for twelve selected SSRTF, SCTF and CCTF tests were performed to assess the predictive capability of the one-dimensional reflood model in the REFLA/TRAC code for core thermal behavior during reflood in a PWR LOCA. Both the core void fraction profile and the clad temperature transients were predicted excellently by the REFLA/TRAC code, including the effects of core inlet subcooling, core flooding rate, core configuration, core power, system pressure, initial clad temperature and so on. The peak clad temperature was predicted within an error of 50 K. Based on these assessment results, it is verified that core thermal-hydraulic behavior during reflood can be predicted excellently with the REFLA/TRAC code under the various conditions in which reflood may occur in a PWR LOCA. (author)

  13. A predictive transport modeling code for ICRF-heated tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Hwang, D.Q.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5

  14. Analytical validation of the CACECO containment analysis code

    International Nuclear Information System (INIS)

    Peak, R.D.

    1979-08-01

    The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. This report covers the verification of the CACECO code by problems that can be solved by hand calculations or by reference to textbook and literature examples. The verification concentrates on the accuracy of the material and energy balances maintained by the code and on the independence of the four cells analyzed by the code so that the user can be assured that the code analyses are numerically correct and independent of the organization of the input data submitted to the code

  15. Serpent: an alternative for the nuclear fuel cells analysis of a BWR

    International Nuclear Information System (INIS)

    Silva A, L.; Del Valle G, E.; Gomez T, A. M.

    2013-10-01

    In the last ten years, the various nuclear engineering research groups of the Universidad Nacional Autonoma de Mexico and the Instituto Politecnico Nacional (UNAM, IPN), research institutes (Instituto Nacional de Investigaciones Nucleares, ININ), as well as the personnel of the Nuclear Plant Management of the Comision Federal de Electricidad, have been using the codes Helios and/or CASMO-4 to generate cross sections (XS) for the nuclear fuel cells of the cores of Units 1 and 2 of the Laguna Verde nuclear power plant. Both codes belong to the Studsvik-Scandpower company, which charges for their use and their maintenance. In recent years, the Serpent code has appeared in the nuclear community, distributed by the OECD/NEA, with no cost for either its use or its maintenance. The code is based on the Monte Carlo method and makes use of parallel processing. At the Escuela Superior de Fisica y Matematicas of the IPN, the personnel have accumulated some experience in the use of Serpent under the direction of ININ staff; from this experience, the infinite multiplication factor has been obtained, for various fuel burnups, for three nuclear fuel cells, without a control rod and with a control rod, for a known thermodynamic state fixed by: a) the fuel temperature (Tf), b) the moderator temperature (Tm) and c) the void fraction (α). Although no comparison was made with the XS generated by the Helios and CASMO-4 codes, the results obtained for the infinite multiplication factor show the expected trends with respect to fuel burnup, both when the control rod is absent and when it is present. The results are encouraging and motivate the study group to continue with the XS generation for a full core in order to build the corresponding nuclear data library as a next step, which can then be used with the codes PARCS of the US NRC, DYN3D of HZDR, or others developed locally in the

  16. Serpent: an alternative for the nuclear fuel cells analysis of a BWR; SERPENT: una alternativa para el analisis de celdas de combustible nuclear de un BWR

    Energy Technology Data Exchange (ETDEWEB)

    Silva A, L.; Del Valle G, E. [IPN, Escuela Superior de Fisica y Matematicas, Av. Instituto Politecnico Nacional s/n, U.P. Adolfo Lopez Mateos, Edificio 9, Col. San Pedro Zacatenco, 07738 Mexico D. F. (Mexico); Gomez T, A. M., E-mail: lidi.s.albarran@gmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2013-10-15

    In the last ten years, the various nuclear engineering research groups of the Universidad Nacional Autonoma de Mexico and the Instituto Politecnico Nacional (UNAM, IPN), research institutes (Instituto Nacional de Investigaciones Nucleares, ININ), as well as the personnel of the Nuclear Plant Management of the Comision Federal de Electricidad, have been using the codes Helios and/or CASMO-4 to generate cross sections (XS) for the nuclear fuel cells of the cores of Units 1 and 2 of the Laguna Verde nuclear power plant. Both codes belong to the Studsvik-Scandpower company, which charges for their use and their maintenance. In recent years, the Serpent code has appeared in the nuclear community, distributed by the OECD/NEA, with no cost for either its use or its maintenance. The code is based on the Monte Carlo method and makes use of parallel processing. At the Escuela Superior de Fisica y Matematicas of the IPN, the personnel have accumulated some experience in the use of Serpent under the direction of ININ staff; from this experience, the infinite multiplication factor has been obtained, for various fuel burnups, for three nuclear fuel cells, without a control rod and with a control rod, for a known thermodynamic state fixed by: a) the fuel temperature (Tf), b) the moderator temperature (Tm) and c) the void fraction (α). Although no comparison was made with the XS generated by the Helios and CASMO-4 codes, the results obtained for the infinite multiplication factor show the expected trends with respect to fuel burnup, both when the control rod is absent and when it is present. The results are encouraging and motivate the study group to continue with the XS generation for a full core in order to build the corresponding nuclear data library as a next step, which can then be used with the codes PARCS of the US NRC, DYN3D of HZDR, or others developed locally

  17. Development of a code MOGRA for predicting the migration of ground additions and its application to various land utilization areas

    International Nuclear Information System (INIS)

    Amano, Hikaru; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2003-01-01

    MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions, including radioactive materials, in the terrestrial environment. It consists of computational modules that are applicable to various evaluation target systems and can be used on personal computers. The code has a dynamic compartment analysis block at its core, together with a graphical user interface (GUI) for model construction, setting of computation parameters, and display of results. The compartments are obtained by classifying various natural environments into groups that exhibit similar properties. The functionality of MOGRA is being verified by applying it to analyses of the migration rates of radioactive substances from the atmosphere to soils and plants and of flow rates into rivers. In this report, a hypothetical combination of land uses was assumed in order to check the functioning of MOGRA. The land use consisted of cultivated land, forest, uncultivated land, an urban area, a river, and a lake. Each land use has its own internal model, which is a basic module. Homogeneous contamination of the land surface by atmospheric deposition of 137Cs (1.0 Bq/m2) was also assumed. The system analyzed the dynamic changes of the 137Cs concentrations in each compartment and the fluxes from one compartment to another. (author)
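
    The dynamic compartment analysis at the core of such a code reduces to a set of coupled first-order differential equations, with first-order transfer rates between compartments and radioactive decay acting as a loss term in each. The sketch below is a generic two-compartment (soil-plant) example with invented transfer coefficients; it is not the MOGRA model or its parameter set.

```python
import numpy as np
from scipy.integrate import solve_ivp

LAMBDA_CS137 = np.log(2) / (30.17 * 365.25)  # Cs-137 decay constant, 1/day

def compartments(t, y, k_soil_plant=1e-3, k_plant_soil=5e-2):
    """Generic 2-compartment model (Bq/m2); transfer rates are illustrative only."""
    soil, plant = y
    d_soil = -(k_soil_plant + LAMBDA_CS137) * soil + k_plant_soil * plant
    d_plant = k_soil_plant * soil - (k_plant_soil + LAMBDA_CS137) * plant
    return [d_soil, d_plant]

# Initial deposition of 1.0 Bq/m2 on soil, simulated for 10 years (in days)
sol = solve_ivp(compartments, (0.0, 3652.5), [1.0, 0.0])
print("final activities (soil, plant):", sol.y[:, -1])
```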

  18. Quantitative software-reliability analysis of computer codes relevant to nuclear safety

    International Nuclear Information System (INIS)

    Mueller, C.J.

    1981-12-01

    This report presents the results of the first year of an ongoing research program to determine the probability of failure characteristics of computer codes relevant to nuclear safety. An introduction to both qualitative and quantitative aspects of nuclear software is given. A mathematical framework is presented which will enable the a priori prediction of the probability of failure characteristics of a code given the proper specification of its properties. The framework consists of four parts: (1) a classification system for software errors and code failures; (2) probabilistic modeling for selected reliability characteristics; (3) multivariate regression analyses to establish predictive relationships among reliability characteristics and generic code property and development parameters; and (4) the associated information base. Preliminary data of the type needed to support the modeling and the predictions of this program are described. Illustrations of the use of the modeling are given but the results so obtained, as well as all results of code failure probabilities presented herein, are based on data which at this point are preliminary, incomplete, and possibly non-representative of codes relevant to nuclear safety

  19. Does the Holland Code Predict Job Satisfaction and Productivity in Clothing Factory Workers?

    Science.gov (United States)

    Heesacker, Martin; And Others

    1988-01-01

    Administered Self-Directed Search to sewing machine operators to determine Holland code, and assessed work productivity, job satisfaction, absenteeism, and insurance claims. Most workers were of the Social code. Social subjects were the most satisfied, Conventional and Realistic subjects next, and subjects of other codes less so. Productivity of…

  20. A parametric study of MELCOR Accident Consequence Code System 2 (MACCS2) Input Values for the Predicted Health Effect

    International Nuclear Information System (INIS)

    Kim, So Ra; Min, Byung Il; Park, Ki Hyun; Yang, Byung Mo; Suh, Kyung Suk

    2016-01-01

    The MELCOR Accident Consequence Code System 2, MACCS2, has been the most widely used off-site consequence analysis code throughout the world. The MACCS2 code is used to estimate the radionuclide concentrations, radiological doses, health effects, and economic consequences that could result from hypothetical nuclear accidents. Most of the MACCS model parameter values are defined by the user, and those input parameters can have a significant impact on the output. A limited parametric study was performed to identify the relative importance of the values of each input parameter in determining the predicted early and latent health effects in MACCS2. These results are not applicable to every nuclear accident scenario, because only a limited calculation was performed with Kori-specific data. The endpoints of the assessment were early and latent cancer risk in the exposed population; parametric studies for other endpoints, such as contamination level, absorbed dose, and economic cost, might therefore produce different results. Accident consequence assessment is important for decision making to minimize the health effects of radiation exposure; accordingly, sufficient parametric studies of the various endpoints and input parameters are required in further research

  1. Optimization of angular setup with proposed β–η chart for solar energy apparatus through Helios orbital analysis

    International Nuclear Information System (INIS)

    Ou, Chung-Jen

    2014-01-01

    Highlights: • An analysis procedure for the angular setup of the SPVS based on Helios orbital analysis is shown. • An average efficiency of 61.87% over the whole year is obtained with this fixed-type SPVS. • A graphical interpretation on the proposed β–η chart for estimating SPVS efficiency is presented. - Abstract: Many reports and patents address tracking-type systems to complete and optimize the acquisition of solar energy. However, fixed-type systems account for most of the solar photovoltaic system (SPVS) market for existing buildings. Based on this fact, it is important to discuss and understand the feasible setup parameters that achieve the optimum radiation-collection efficiency of a fixed-type system. The purpose of the present report is to provide the methodology and to investigate the optimum angles of the system facilities with minimal mathematics. The calculations show that, for the demonstrated locations, the maximum whole-year efficiency of a fixed-type system can reach more than 61% of that of a fully tracking system at the same location and time intervals. Based on these calculations, an appropriate optical system can be designed to improve system performance for the requirements of the majority of buildings in any location
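
    The radiation-collection efficiency of a fixed panel relative to a tracking one ultimately reduces to the cosine of the incidence angle between the sun direction and the panel normal. The sketch below computes that cosine factor from the solar elevation/azimuth and the panel tilt/azimuth by a simple dot product of unit vectors; it is a generic geometric illustration, not the β–η chart procedure proposed in the paper.

```python
import numpy as np

def cos_incidence(sun_elev_deg, sun_az_deg, tilt_deg, panel_az_deg):
    """Cosine of the incidence angle between the sun and a tilted panel normal.
    Azimuths are measured clockwise from north; elevation from the horizon."""
    se, sa = np.radians(sun_elev_deg), np.radians(sun_az_deg)
    tb, pa = np.radians(tilt_deg), np.radians(panel_az_deg)
    # Unit vector toward the sun in (east, north, up) coordinates
    sun = np.array([np.cos(se) * np.sin(sa), np.cos(se) * np.cos(sa), np.sin(se)])
    # Unit normal of a panel tilted by `tilt_deg` toward azimuth `panel_az_deg`
    normal = np.array([np.sin(tb) * np.sin(pa), np.sin(tb) * np.cos(pa), np.cos(tb)])
    return max(0.0, float(np.dot(sun, normal)))  # 0 when the sun is behind the panel

# Example: sun at 40 deg elevation due south, panel tilted 30 deg facing south
print(f"cosine factor = {cos_incidence(40, 180, 30, 180):.3f}")
```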

  2. Simulation of the KAERI PASCAL Test with MARS-KS and TRACE Codes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Won; Cheong, Aeju; Shin, Andong; Cho, Min Ki [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    In order to validate the operational performance of the PAFS, KAERI has performed an experimental investigation using the PASCAL (PAFS Condensing heat removal Assessment Loop) facility. In this study, we simulated the KAERI PASCAL SS-540-P1 test with the MARS-KS V1.4 and TRACE V5.0 p4 codes to assess the code predictability for the condensation heat transfer inside the passive auxiliary feedwater system. The calculated results of heat flux, inner wall surface temperature of the condensing tube, fluid temperature, and steam mass flow rate are compared with the experimental data. The results show that MARS-KS generally under-predicts the heat fluxes, while TRACE over-predicts the heat flux at the tube inlet region and under-predicts it at the tube outlet region. The TRACE prediction shows about 3% more steam condensation than the MARS-KS prediction.

  3. Numerical analysis of reflood simulation based on a mechanistic, best-estimate, approach by KREWET code

    International Nuclear Information System (INIS)

    Chun, Moon-Hyun; Jeong, Eun-Soo

    1983-01-01

    A new computer code entitled KREWET has been developed in an effort to improve the accuracy and applicability of existing reflood heat transfer simulation computer codes. Sample calculations of temperature histories and heat transfer coefficients were made using the KREWET code, and the results are compared with the predictions of REFLUX, QUEN1D, and the PWR-FLECHT data for various conditions. These show favourable agreement in terms of clad temperature versus time. For high flooding rates (5-15 cm/s) and high pressure (∼413 kPa), reflood behavior is predicted reasonably well by the KREWET code as well as by the other codes. For low flooding rates (less than ∼4 cm/s) and low pressure (∼138 kPa), the predictions show considerable error in evaluating the rewet position versus time. This observation is common to all the codes examined in the present work

  4. Numerical analysis for reflood simulation based on a mechanistic, best-estimate, approach by KREWET code

    International Nuclear Information System (INIS)

    Chun, M.-H.; Jeong, E.-S.

    1983-01-01

    A new computer code entitled KREWET has been developed in an effort to improve the accuracy and applicability of existing reflood heat transfer simulation computer codes. Sample calculations of temperature histories and heat transfer coefficients were made using the KREWET code, and the results are compared with the predictions of REFLUX, QUEN1D, and the PWR-FLECHT data for various conditions. These show favorable agreement in terms of clad temperature versus time. For high flooding rates (5-15 cm/s) and high pressure (approx. 413 kPa), reflood behavior is predicted reasonably well by the KREWET code as well as by the other codes. For low flooding rates (less than approx. 4 cm/s) and low pressure (approx. 138 kPa), the predictions show considerable error in evaluating the rewet position versus time. This observation is common to all the codes examined in the present work

  5. Tokamak plasma power balance calculation code (TPC code) outline and operation manual

    International Nuclear Information System (INIS)

    Fujieda, Hirobumi; Murakami, Yoshiki; Sugihara, Masayoshi.

    1992-11-01

    This report is a detailed description of the TPC code, which calculates the power balance of a tokamak plasma according to the ITER guidelines. The TPC code runs on a personal computer (Macintosh or J-3100/IBM-PC). Using input data such as the plasma shape, toroidal magnetic field, plasma current, electron temperature, electron density, impurities and heating power, the TPC code can determine the operation point of the fusion reactor (the ion temperature is assumed to be equal to the electron temperature). The supplied flux (volt·seconds) and burn time are also estimated from the coil design parameters. The calculated energy confinement time is compared with various L-mode scaling laws, and the confinement enhancement factor (H-factor) is evaluated. The divertor heat load is predicted by using simple scaling models (constant-χ, Bohm-type-χ and JT-60U empirical scaling models). Frequently used data can be stored in a 'device file' and used as default values. The TPC code can generate 2-D mesh data, and the POPCON plot is drawn by a contour-line plotting program (CONPLT). The operation manual for the CONPLT code is also included. (author)
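
    The heart of such a power-balance calculation is the zero-dimensional statement that, in steady state, the stored plasma energy W divided by the energy confinement time equals the power lost through transport, and that the H-factor is the ratio of this required confinement time to an L-mode scaling prediction. The sketch below shows that bookkeeping in generic form; the L-mode confinement time is supplied by the caller rather than hard-coded, since the coefficients of scalings such as ITER89-P are not reproduced here, and the example numbers are invented.

```python
def h_factor(stored_energy_MJ, heating_power_MW, radiated_power_MW, tau_L_mode_s):
    """Confinement enhancement factor from a 0-D steady-state power balance.

    tau_E = W / P_loss   (required energy confinement time)
    H     = tau_E / tau_L (ratio to an L-mode scaling-law prediction)
    """
    p_loss = heating_power_MW - radiated_power_MW   # MW lost through transport
    tau_e = stored_energy_MJ / p_loss               # seconds (MJ / MW)
    return tau_e / tau_L_mode_s, tau_e

# Hypothetical numbers; tau_L would normally come from a scaling law such as ITER89-P
H, tau_e = h_factor(stored_energy_MJ=300.0, heating_power_MW=100.0,
                    radiated_power_MW=20.0, tau_L_mode_s=1.9)
print(f"tau_E = {tau_e:.2f} s, H-factor = {H:.2f}")
```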

  6. Positive predictive value of peptic ulcer diagnosis codes in the Danish National Patient Registry.

    Science.gov (United States)

    Viborg, Søren; Søgaard, Kirstine Kobberøe; Jepsen, Peter

    2017-01-01

    Diagnoses of peptic ulcer are registered in the Danish National Patient Registry (DNPR) for administrative as well as research purposes, but it is unknown whether the coding validity depends on the location of the ulcer. To validate the International Classification of Diseases, 10th revision, diagnosis codes of peptic ulcer in the DNPR, we estimated positive predictive values (PPVs) for gastric and duodenal ulcer diagnoses. We identified all patients registered with a hospital discharge diagnosis of peptic ulcer from Aarhus University Hospital, Denmark, in 1995-2006. Among them, we randomly selected 200 who had an outpatient gastroscopy at the time of ulcer diagnosis. We reviewed the findings from these gastroscopies to confirm the presence of peptic ulcer and its location. We calculated PPVs and corresponding 95% confidence intervals (CIs) of gastric and duodenal ulcer diagnoses, using descriptions from the gastroscopic examinations as the reference standard. In total, 182 records (91%) were available for review. The overall PPV of peptic ulcer diagnoses in the DNPR was 95.6% (95% CI 91.5-98.1), with PPVs of 90.3% (95% CI 82.4-95.5) for gastric ulcer diagnoses and 94.4% (95% CI 87.4-98.2) for duodenal ulcer diagnoses. PPVs were constant over time. The PPV of uncomplicated peptic ulcer diagnoses in the DNPR is high, and the location of the ulcers is registered correctly in most cases, indicating that the diagnoses are useful for research purposes.
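
    For readers who want to reproduce this kind of registry validation, a PPV is simply the number of confirmed cases divided by the number of reviewed positive records, with a binomial confidence interval around it. The sketch below computes a PPV with a Wilson score interval; the counts are chosen only to be consistent with the overall figures quoted above (182 reviewed records, PPV 95.6%), and the published CI may have been computed with a different (e.g. exact binomial) method.

```python
import math

def ppv_wilson(confirmed, reviewed, z=1.96):
    """Positive predictive value with a 95% Wilson score confidence interval."""
    p = confirmed / reviewed
    denom = 1 + z**2 / reviewed
    centre = (p + z**2 / (2 * reviewed)) / denom
    half = z * math.sqrt(p * (1 - p) / reviewed + z**2 / (4 * reviewed**2)) / denom
    return p, centre - half, centre + half

# Counts consistent with the abstract's overall numbers (not the per-site data)
ppv, lo, hi = ppv_wilson(confirmed=174, reviewed=182)
print(f"PPV = {ppv:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```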

  7. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters, such as soil properties and food ingestion rates, in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) the built-in graphics package that shows parameter sensitivities while the RESRAD code is operational

  8. Fast neutron analysis code SAD1

    International Nuclear Information System (INIS)

    Jung, M.; Ott, C.

    1985-01-01

    A listing and an example of the outputs of the M.C. code SAD1 are given here. This code has been used many times to predict the response of hydrogenic materials (in our case emulsions or plastics) to fast neutrons via elastic n,p scattering. It can be easily extended to other kinds of such materials and to any kind of incident fast neutron spectrum
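
    For elastic n,p scattering, which dominates the fast-neutron response of hydrogenic materials, the recoil-proton energy from a neutron of energy E_n is E_p = E_n·cos²θ, and isotropic centre-of-mass scattering on hydrogen makes the recoil spectrum uniform between 0 and E_n. The short Monte Carlo sketch below samples such recoils for a monoenergetic source; it is a generic illustration, not the SAD1 algorithm.

```python
import random

def sample_proton_recoils(e_neutron_mev, n_samples=100_000, seed=42):
    """Sample recoil-proton energies for elastic n,p scattering.

    Isotropic centre-of-mass scattering on hydrogen gives a recoil spectrum
    that is uniform between 0 and the incident neutron energy.
    """
    rng = random.Random(seed)
    return [rng.uniform(0.0, e_neutron_mev) for _ in range(n_samples)]

recoils = sample_proton_recoils(2.5)           # 2.5 MeV neutrons, e.g. a D-D source
mean_recoil = sum(recoils) / len(recoils)      # should be close to E_n / 2
print(f"mean recoil energy = {mean_recoil:.3f} MeV")
```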

  9. Annotating Diseases Using Human Phenotype Ontology Improves Prediction of Disease-Associated Long Non-coding RNAs.

    Science.gov (United States)

    Le, Duc-Hau; Dao, Lan T M

    2018-05-23

    Recently, many long non-coding RNAs (lncRNAs) have been identified and their biological functions have been characterized; however, our understanding of their underlying molecular mechanisms related to disease is still limited. To overcome the limitations in experimentally identifying disease-lncRNA associations, computational methods have been proposed as a powerful tool to predict such associations. These methods are usually based on the similarities between diseases or lncRNAs, since it was reported that similar diseases are associated with functionally similar lncRNAs. Therefore, prediction performance is highly dependent on how well the similarities can be captured. Previous studies have calculated the similarity between two diseases by mapping each disease exactly to a single Disease Ontology (DO) term and then using a semantic similarity measure to calculate the similarity between them. The problem with this approach, however, is that a disease can be described by more than one DO term. Until now, there has been no annotation database of DO terms for diseases, only for genes. In contrast, the Human Phenotype Ontology (HPO) is designed to fully annotate human disease phenotypes. Therefore, in this study, we constructed disease similarity networks/matrices using HPO instead of DO. Then, we used these networks/matrices as inputs of two representative machine learning-based and network-based ranking algorithms, namely regularized least squares and heterogeneous graph-based inference, respectively. The results showed that the prediction performance of the two algorithms on the HPO-based networks/matrices is better than that on the DO-based ones. In addition, our method can predict 11 novel cancer-associated lncRNAs, which are supported by literature evidence. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Evaluation of Yonggwang unit 4 cycle 5 using SPNOVA code

    International Nuclear Information System (INIS)

    Choi, Y. S.; Cha, K. H.; Lee, E. K.; Park, M. K.

    2004-01-01

    A core follow calculation of Yonggwang (YGN) unit 4 cycle 5 was performed to evaluate whether the SPNOVA code is applicable to the Korean standard nuclear power plant (KSNP). The SPNOVA code consists of the BEPREPN and ANC codes, which represent the in-core detector and neutronics models, respectively. The SPNOVA core depletion model was compared and verified against ANC depletion results in terms of critical boron concentration (CBC), peaking factor (Fq) and radial power distribution. For YGN4, SPNOVA predicts a CBC about 30 ppm lower than that predicted by ROCS. The Fq and radial power distribution behavior calculated by SPNOVA are conservatively higher than the ROCS predictions. The SPNOVA results are also compared with measurement data from snapshots and CECOR core calculations. It is reasonable to accept SPNOVA for the analysis of the KSNP. The SPNOVA model for the KSNP will be used in the development of the new platinum and vanadium in-core detectors

  11. Predicting CYP2C19 Catalytic Parameters for Enantioselective Oxidations Using Artificial Neural Networks and a Chirality Code

    Science.gov (United States)

    Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.

    2013-01-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes especially metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, neural networks predicted correctly enantioselectivity and comparable catalytic parameters measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
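
    The Box-Cox step mentioned above maps a skewed, strictly positive set of catalytic parameters onto a more nearly normal distribution before network training, via y = (x^λ - 1)/λ for λ ≠ 0 and y = ln x for λ = 0. A minimal sketch using SciPy is shown below; the kcat values are invented placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical, skewed kcat values (min^-1); placeholders, not study data
kcat = np.array([0.12, 0.35, 0.8, 1.4, 3.9, 7.2, 15.0, 42.0])

# SciPy estimates the lambda that best normalizes the data
transformed, lam = stats.boxcox(kcat)
print(f"estimated lambda = {lam:.3f}")

# The same transform written out explicitly for one value
x = kcat[0]
y = (x**lam - 1) / lam if abs(lam) > 1e-12 else np.log(x)
assert np.isclose(y, transformed[0])
```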

  12. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
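
    The difference-of-Gaussian bandpass behaviour described above is straightforward to reproduce: blur the image at two scales and subtract, which suppresses both the slowly varying irradiance and the highest-frequency noise while retaining edge detail. The sketch below is a generic DoG filter with arbitrary scale choices, not the actual retinex-coding channel analysed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_fine=1.0, sigma_coarse=4.0):
    """Bandpass an image by subtracting a coarse Gaussian blur from a fine one.
    Slowly varying irradiance (e.g. shadows) falls mostly in the coarse blur."""
    img = image.astype(float)
    return gaussian_filter(img, sigma_fine) - gaussian_filter(img, sigma_coarse)

# Synthetic example: a reflectance step edge under a smooth illumination gradient
x = np.linspace(0.0, 1.0, 256)
scene = np.outer(np.ones(256), (x > 0.5).astype(float))    # reflectance edge
illum = np.outer(np.ones(256), 0.5 + 0.5 * x)               # smooth irradiance
bandpassed = difference_of_gaussians(scene * illum)
print(bandpassed.shape)
```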

  13. Visual Communication with Retinex Coding

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  14. Fast H.264/AVC FRExt intra coding using belief propagation.

    Science.gov (United States)

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by adopting a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits an accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.

  15. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M H; Hur, C; Kim, D K; Cho, H J [Pohang Univ. of Science and Technology, Pohang (Korea, Republic of)]

    1996-07-15

    The objective of the present work is to verify the prediction and analysis capability of the MELCOR code for the progression of severe accidents in light water reactors, and also to evaluate the appropriateness of the thermal-hydraulic models used in the MELCOR code. To achieve this objective, the results of experiments are compared with MELCOR code calculations. In particular, a comparison between the CORA-13 experiment and the MELCOR code calculation was performed.

  16. The helium-neon laser in the regeneration of the sectioned and sutured sciatic nerve

    Directory of Open Access Journals (Sweden)

    Pedro Rodríguez Sotelo

    Full Text Available To assess the effect of the helium-neon (He-Ne) laser on the regeneration of the sectioned and sutured sciatic nerve, a total of 72 Wistar rats were used, divided into 3 groups that were irradiated with He-Ne light, xenon light and sunlight, respectively. The control group underwent only the section and suture of the sciatic nerve. The rats whose nerve was irradiated with xenon light or sunlight evolved in the same way as the control group; that is, all of them presented trophic lesions of greater or lesser degree, as well as motor lesions. In the group in which He-Ne was used, only the loss of a toenail in one animal was observed, and no motor disorders were noted. He-Ne laser irradiation has an important influence on the repair of nerve lesions in the rat

  17. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of the aerosol onto surfaces within the containment, from where they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way

  18. Comparison of PWR-IMF and FR fuel cycles

    International Nuclear Information System (INIS)

    Darilek, Petr; Zajac, Radoslav; Breza, Juraj; Necas, Vladimir

    2007-01-01

    The paper gives a comparison of a PWR (Russian-origin VVER-440) cycle with improved micro-heterogeneous inert matrix fuel assemblies and an FR cycle. The micro-heterogeneous combined assembly contains transmutation pins, with Pu and MAs from the reprocessing of burned uranium, and standard uranium pins. Cycle analyses were performed with the HELIOS spectral code and the SCALE code system. The comparison is based on the fuel cycle indicators used in the project RED-IMPACT, part of EU FP6. The advantages of both closed cycles are pointed out. (authors)

  19. Survey Of Lossless Image Coding Techniques

    Science.gov (United States)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the area of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.
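
    The predictive-coding idea behind several of the surveyed schemes can be illustrated in a few lines: predict each pixel from its left neighbour, keep only the prediction residual, and estimate the achievable lossless compression from the zeroth-order entropy of the residuals. The sketch below uses a deliberately simple previous-pixel predictor on synthetic data; it is not any of the algorithms surveyed in the paper.

```python
import numpy as np

def residual_entropy_bits(image):
    """Entropy (bits/pixel) of previous-pixel prediction residuals."""
    img = image.astype(np.int32)
    residuals = np.diff(img, axis=1).ravel()        # e[i,j] = x[i,j] - x[i,j-1]
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic smooth 8-bit image: strong pixel correlation gives low residual entropy
rng = np.random.default_rng(0)
img = np.cumsum(rng.integers(-2, 3, size=(128, 128)), axis=1) + 128
img = np.clip(img, 0, 255).astype(np.uint8)
bits = residual_entropy_bits(img)
print(f"~{bits:.2f} bits/pixel -> compression ratio ~{8.0 / bits:.2f}:1")
```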

  20. Users' guide to CACECO containment analysis code. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Peak, R.D.

    1979-06-01

    The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. The code is included in the National Energy Software Center Library at Argonne National Laboratory as Program No. 762. This users' guide describes the CACECO code and its data input requirements. The code description covers the many mathematical models used and the approximations used in their solution. The descriptions are detailed to the extent that the user can modify the code to suit his unique needs, and, indeed, the reader is urged to consider code modification acceptable.

  1. ISODEP, A Fuel Depletion Analysis Code for Predicting Isotopic ...

    African Journals Online (AJOL)

    The trend of results was found to be consistent with those obtained by analytical and other numerical methods. Discovery and Innovation Vol. 13 no. 3/4 December (2001) pp. 184-195. KEY WORDS: depletion analysis, code, research reactor, simultaneous equations, decay of nuclides, radionuclitides, isotope. Résumé

  2. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend the code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES have been implemented in the MATRA code, as well as parallel computing algorithms using MPI and OpenMP. The code is written in Fortran 90 and has some user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it can potentially be used for best-estimate calculations of the core thermal-hydraulic field by incorporation into multiphysics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically for the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided within the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and the experimental data. A validation process should be preceded by code and solution verification; however, quantification of the verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting the subchannel void distribution was evaluated for a single void fraction measurement data point in a 5x5 PWR test bundle within the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach yielded similar uncertainties but did not account for the nonlinear effects on the
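
    The Monte-Carlo approach referred to above can be sketched as follows: perturb the uncertain simulation inputs within their assumed distributions, rerun the model for each sample, and take the standard deviation of the resulting simulation-minus-measurement differences as the validation standard uncertainty. In the sketch below the model, input uncertainties and measured value are invented stand-ins, not the MATRA calculation or the OECD UAM benchmark data.

```python
import numpy as np

rng = np.random.default_rng(1)

def void_model(pressure, power, flow):
    """Stand-in for a subchannel code run returning a void fraction."""
    return 0.4 + 0.02 * (power - 1.0) - 0.05 * (flow - 1.0) - 0.01 * (pressure - 1.0)

measured_void = 0.42            # placeholder "experimental" value
n_samples = 5000

# Perturb normalized inputs within assumed 1-sigma uncertainties
pressure = rng.normal(1.0, 0.01, n_samples)
power = rng.normal(1.0, 0.02, n_samples)
flow = rng.normal(1.0, 0.03, n_samples)

errors = void_model(pressure, power, flow) - measured_void
print(f"mean error = {errors.mean():+.4f}, "
      f"validation standard uncertainty = {errors.std(ddof=1):.4f}")
```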

  3. Radiation transport phenomena and modeling - part A: Codes

    International Nuclear Information System (INIS)

    Lorence, L.J.

    1997-01-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped

  4. Methods and Model Development for Coupled RELAP5/PARCS Analysis of the Atucha-II Nuclear Power Plant

    Directory of Open Access Journals (Sweden)

    Andrew M. Ward

    2011-01-01

    Full Text Available In order to analyze the steady state and transient behavior of CNA-II, several tasks were required. Methods and models were developed in several areas. HELIOS lattice models were developed and benchmarked against WIMS/MCNP5 results generated by NA-SA. Cross-sections for the coupled RELAP5/PARCS calculation were extracted from HELIOS within the GenPMAXS framework. The validation of both HELIOS and PARCS was performed primarily by comparisons to WIMS/PUMA and MCNP for idealized models. Special methods were developed to model the control rods and boron injection systems of CNA-II. The insertion of the rods is oblique, and a special routine was added to PARCS to treat this effect. CFD results combined with specialized mapping routines were used to model the boron injection system. In all cases there was good agreement in the results which provided confidence in the neutronics methods and modeling. A coupled code benchmark between U of M and U of Pisa is ongoing and results are still preliminary. Under a LOCA transient, the best estimate behavior of the core appears to be acceptable.

  5. PopCORN: Hunting down the differences between binary population synthesis codes

    Science.gov (United States)

    Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.

    2014-02-01

    Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough

  6. Prediction Capability of SPACE Code about the Loop Seal Clearing on ATLAS SBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Lee, Jong Hyuk; Chung, Bub Dong; Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The most probable break size for loop seal reforming was determined to be 4 inches by pre-calculations conducted with RELAP5 and MARS. Many organizations have participated with various system analysis codes, for example RELAP5, MARS, and TRACE; KAERI participated with the SPACE code. The SPACE code has been developed for the design and safety analysis of nuclear thermal-hydraulic systems. KHNP and other organizations have collaborated on it during the last 10 years, and it is currently undergoing the certification procedures. SPACE has the capability to analyze the droplet field with a full set of governing equations: continuity, momentum, and energy. The SPACE code has participated in the PKL-3 benchmark program as an international activity, and the DSP-04 benchmark problem is an application of SPACE among the domestic activities. The cold leg top slot break accident of the APR1400 reactor has been modeled and analyzed with the SPACE code. The benchmark experiment of the DSP-04 program was performed with the ATLAS facility. The break size was selected as 4 inches for the APR1400, and the corresponding scaled-down break size was modeled with the SPACE code. Loop seal reforming occurred in all four loops, but the PCT showed no significant excursion.

  7. MVP utilization for PWR design code

    International Nuclear Information System (INIS)

    Matsumoto, Hideki; Tahara, Yoshihisa

    2001-01-01

    MHI is studying a method for spatially dependent resonance cross sections in order to predict the power distribution in a fuel pellet accurately. For this purpose, the multiband method and the Stoker/Weiss method were implemented in the two-dimensional transport code PHOENIX-P, and the methods were validated by comparing them with the MVP code. Although an appropriate reference could not be obtained from deterministic codes for the resonance cross-section study, the result of the Monte Carlo code MVP is now available and useful as a reference. It is shown here how MVP is used to develop the multiband method and the Stoker/Weiss method, and how effective the MVP results are for the study of the resonance cross sections. (author)

  8. FILM-30: A Heat Transfer Properties Code for Water Coolant

    International Nuclear Information System (INIS)

    MARSHALL, THERON D.

    2001-01-01

    A FORTRAN computer code has been written to calculate the heat transfer properties at the wetted perimeter of a coolant channel when provided the bulk water conditions. This computer code is titled FILM-30 and the code calculates its heat transfer properties by using the following correlations: (1) Sieder-Tate: forced convection, (2) Bergles-Rohsenow: onset to nucleate boiling, (3) Bergles-Rohsenow: partially developed nucleate boiling, (4) Araki: fully developed nucleate boiling, (5) Tong-75: critical heat flux (CHF), and (6) Marshall-98: transition boiling. FILM-30 produces output files that provide the heat flux and heat transfer coefficient at the wetted perimeter as a function of temperature. To validate FILM-30, the calculated heat transfer properties were used in finite element analyses to predict internal temperatures for a water-cooled copper mockup under one-sided heating from a rastered electron beam. These predicted temperatures were compared with the measured temperatures from the author's 1994 and 1998 heat transfer experiments. There was excellent agreement between the predicted and experimentally measured temperatures, which confirmed the accuracy of FILM-30 within the experimental range of the tests. FILM-30 can accurately predict the CHF and transition boiling regimes, which is an important advantage over current heat transfer codes. Consequently, FILM-30 is ideal for predicting heat transfer properties for applications that feature high heat fluxes produced by one-sided heating
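
    As an example of the correlations listed above, the Sieder-Tate relation for turbulent single-phase forced convection is Nu = 0.027·Re^0.8·Pr^(1/3)·(μ_b/μ_w)^0.14, from which the heat transfer coefficient follows as h = Nu·k/D. The sketch below evaluates only this one correlation with illustrative water-like properties; it is not the FILM-30 implementation, and the boiling, CHF and transition-boiling regimes are not reproduced.

```python
def sieder_tate_htc(re, pr, k_fluid, d_hyd, mu_bulk, mu_wall):
    """Heat transfer coefficient (W/m2-K) from the Sieder-Tate correlation.
    Valid for turbulent, single-phase forced convection in a channel."""
    nu = 0.027 * re**0.8 * pr**(1.0 / 3.0) * (mu_bulk / mu_wall)**0.14
    return nu * k_fluid / d_hyd

# Illustrative water-like properties (not FILM-30 inputs)
h = sieder_tate_htc(re=1.0e5, pr=5.0, k_fluid=0.6, d_hyd=0.01,
                    mu_bulk=8.0e-4, mu_wall=5.0e-4)
print(f"h = {h:.0f} W/m2-K")
```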

  9. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc

    International Nuclear Information System (INIS)

    Banner, D.

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 data base has been used for this study. In a first step, three thermal-hydraulic mixing models were implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented in order of increasing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model the generation and decay of turbulence in the DNB test section. Model C additionally represents the oriented transverse flows due to mixing vanes on top of the k-L equation. A parametric study carried out with the three mixing models identifies the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt was made to replace the correlations with another statistical approach (a pseudo-cubic thin-plate spline method). It is then shown that the standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs
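
    The figure of merit in such DNB assessments is the distribution of predicted-to-measured (P/M) critical heat flux ratios over the database: a mean near 1.0 indicates an unbiased prediction and a small standard deviation indicates a tight one, which is what the advanced statistical treatment improves. The sketch below merely computes those statistics for placeholder values; it is not the THYC/E161 evaluation.

```python
import numpy as np

def pm_statistics(predicted, measured):
    """Mean and standard deviation of predicted-to-measured CHF ratios."""
    ratios = np.asarray(predicted, dtype=float) / np.asarray(measured, dtype=float)
    return ratios.mean(), ratios.std(ddof=1)

# Placeholder CHF values (MW/m2), not the EPRI/Columbia E161 data
predicted = [2.91, 3.10, 2.55, 3.42, 2.78]
measured = [3.00, 3.05, 2.60, 3.30, 2.90]
mean_pm, std_pm = pm_statistics(predicted, measured)
print(f"P/M mean = {mean_pm:.3f}, std = {std_pm:.3f}")
```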

  10. Reaction path of energetic materials using THOR code

    Science.gov (United States)

    Durães, L.; Campos, J.; Portugal, A.

    1998-07-01

    The method of predicting the reaction path using the THOR code allows, for isobaric and isochoric adiabatic combustion and CJ detonation regimes, the calculation of the composition and thermodynamic properties of the reaction products of energetic materials. The THOR code assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using the HL EoS. The code allows the possibility of estimating various sets of reaction products, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated and discussed, namely pure ammonium nitrate and its derived explosive ANFO, and nitromethane, because their equivalence ratios are respectively lower than, near, and greater than stoichiometric. Predictions of the reaction path are in good correlation with experimental values, proving the validity of the proposed method.

  11. CBP Phase I Code Integration

    International Nuclear Information System (INIS)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-01-01

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown and Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown and Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. As part of the CBP project, a general Dynamic Link Library (DLL) interface

  12. CBP PHASE I CODE INTEGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-09-30

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown & Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown & Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. As part of the CBP project, a general Dynamic Link Library (DLL) interface was

  13. Technical Findings, Lessons Learned, and Recommendations Resulting from the Helios Prototype Vehicle Mishap

    Science.gov (United States)

    Noll, Thomas E.; Ishmael, Stephen D.; Henwood, Bart; Perez-Davis, Marla E.; Tiffany, Geary C.; Madura, John; Gaier, Matthew; Brown, John M.; Wierzbanowski, Ted

    2007-01-01

    The Helios Prototype was originally planned to be two separate vehicles, but because of resource limitations only one vehicle was developed to demonstrate two missions. The vehicle consisted of two configurations, one for each mission. One configuration, designated HP01, was designed to operate at extremely high altitudes using batteries and high-efficiency solar cells spread across the upper surface of its 247-foot wingspan. On August 13, 2001, the HP01 configuration reached an altitude of 96,863 feet, a world record for sustained horizontal flight by a winged aircraft. The other configuration, designated HP03, was designed for long-duration flight. The plan was to use the solar cells to power the vehicle's electric motors and subsystems during the day and to use a modified commercial hydrogen-air fuel cell system during the night. The aircraft design used wing dihedral, engine power, elevator control surfaces, and a stability augmentation and control system to provide aerodynamic stability and control. At about 30 minutes into the second flight of HP03, the aircraft encountered a disturbance in the form of turbulence and morphed into an unexpected, persistent, high-dihedral configuration. As a result of the persistent high dihedral, the aircraft became unstable in a very divergent pitch mode in which the airspeed excursions from the nominal flight speed about doubled every cycle of the oscillation. The aircraft's design airspeed was subsequently exceeded and the resulting high dynamic pressures caused the wing leading-edge secondary structure on the outer wing panels to fail and the solar cells and skin on the upper surface of the wing to rip away. As a result, the vehicle lost its ability to maintain lift, fell into the Pacific Ocean within the confines of the U.S. Navy's Pacific Missile Range Facility, and was destroyed. This paper describes the mishap and its causes, and presents the technical recommendations and lessons learned for improving the design

  14. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    International Nuclear Information System (INIS)

    Banas, A.O.; Carver, M.B.; Unrau, D.

    1995-01-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels

  15. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    Energy Technology Data Exchange (ETDEWEB)

    Banas, A.O.; Carver, M.B. [Chalk River Laboratories (Canada); Unrau, D. [Univ. of Toronto (Canada)

    1995-09-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  16. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this packages provides...

  17. Development of a tracer transport option for the NAPSAC fracture network computer code

    International Nuclear Information System (INIS)

    Herbert, A.W.

    1990-06-01

    The NAPSAC computer code predicts groundwater flow through fractured rock using a direct fracture network approach. This paper describes the development of a tracer transport algorithm for the NAPSAC code. A very efficient particle-following approach is used, enabling tracer transport to be predicted through large fracture networks. The new algorithm is tested against three test examples. These demonstrations confirm the accuracy of the code for simple networks, where there is an analytical solution to the transport problem, and illustrate the use of the computer code on a more realistic problem. (author)
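
    As a hedged illustration of the particle-following idea described above (not the NAPSAC algorithm itself), the sketch below routes particles through a tiny, invented fracture network, choosing the outgoing fracture at each intersection with probability proportional to flow and accumulating travel time; real fracture-network codes do this over many thousands of stochastically generated fractures.

```python
# Illustrative particle-following step (not NAPSAC): at each fracture
# intersection a particle chooses the next fracture with probability
# proportional to the outgoing flow, and travel time accumulates as
# segment length divided by flow velocity. The network below is invented.

import random

# node -> list of (next_node, velocity m/s, length m)
NETWORK = {
    "inlet": [("junction", 1.0e-5, 2.0)],
    "junction": [("outlet_a", 2.0e-5, 3.0), ("outlet_b", 1.0e-5, 1.5)],
}

def travel_time(start, absorbing):
    node, t = start, 0.0
    while node not in absorbing:
        edges = NETWORK[node]
        weights = [v for _, v, _ in edges]             # flow-weighted routing
        nxt, v, length = random.choices(edges, weights=weights)[0]
        t += length / v
        node = nxt
    return t

times = [travel_time("inlet", {"outlet_a", "outlet_b"}) for _ in range(5000)]
print(f"mean tracer travel time ~ {sum(times)/len(times)/3600:.1f} h")
```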

  18. A CFD code comparison of wind turbine wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Storey, R. C.; Sørensen, Niels N.

    2014-01-01

    A comparison is made between the EllipSys3D and SnS CFD codes. Both codes are used to perform Large-Eddy Simulations (LES) of single wind turbine wakes, using the actuator disk method. The comparison shows that both LES models predict similar velocity deficits and stream-wise Reynolds-stresses fo...

  19. User effects on the transient system code calculations. Final report

    International Nuclear Information System (INIS)

    Aksan, S.N.; D'Auria, F.

    1995-01-01

    Large thermal-hydraulic system codes are widely used to perform safety and licensing analyses of nuclear power plants and to optimize operational procedures and the plant design itself. The capabilities of these codes are evaluated by comparing the code predictions with measured experimental data obtained from various types of separate-effects and integral test facilities. In recent years, some attempts have been made to establish methodologies to evaluate the accuracy and the uncertainty of code predictions and, consequently, to judge the acceptability of the codes. In none of these methodologies has the influence of the code user on the calculated results been directly addressed. In this paper, the results of investigations on user effects for thermal-hydraulic transient system codes are presented and discussed on the basis of some case studies. The general findings of the investigations show that, in addition to user effects, there are other reasons that affect the results of the calculations and which are hidden under user effects. Both the hidden factors and the direct user effects are discussed in detail, and general recommendations and conclusions are presented to control and limit them

  20. Qualification of ARROTTA code for LWR accident analysis

    International Nuclear Information System (INIS)

    Huang, P.-H.; Peng, K.Y.; Lin, W.-C.; Wu, J.-Y.

    2004-01-01

    This paper presents the qualification efforts performed by TPC and INER for the 3-D spatial kinetics code ARROTTA for LWR core transient analysis. TPC and INER started a joint 5-year project in 1989 to establish independent capabilities to perform reload design and transient analysis utilizing state-of-the-art computer programs. As part of this effort, the ARROTTA code was chosen to perform multi-dimensional kinetics calculations such as rod ejection for PWRs and rod drop for BWRs. To qualify ARROTTA for analysis of FSAR licensing-basis core transients, ARROTTA has been benchmarked for static core analysis against plant measured data and SIMULATE-3 predictions, and for kinetic analysis against available benchmark problems. The static calculations compared include critical boron concentration, core power distribution, and control rod worth. The results indicate that ARROTTA predictions match plant measured data and SIMULATE-3 predictions very well. The kinetic benchmark problems validated include the NEACRP rod ejection problem, the 3-D LMW LWR rod withdrawal/insertion problem, and the 3-D LRA BWR transient benchmark problem. The results indicate that ARROTTA's accuracy and stability are excellent compared to other space-time kinetics codes. It is therefore concluded that ARROTTA provides accurate predictions of multi-dimensional core transients for LWRs. (author)

  1. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    This paper describes the application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed. (Auth.)

  2. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    The application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs is described. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of the codes in parametric studies and comparison of code predictions with experiments are also discussed

  3. Benchmark Simulation for the Development of the Regulatory Audit Subchannel Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G. H.; Song, C.; Woo, S. W. [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-05-15

    For the safe and reliable operation of a reactor, it is important to accurately predict the flow and temperature distributions in the thermal-hydraulic design of a reactor core. A subchannel approach can give reasonable flow and temperature distributions within a short computing time. The Korea Institute of Nuclear Safety (KINS) is presently reviewing a new subchannel code, THALES, which will substitute for both the THINC-IV and TORC codes. To assess the prediction performance of THALES, KINS is developing a subchannel analysis code for independent audit calculations. The code is based on the workstation version of COBRA-IV-I. The main objective of the present study is to assess the performance of the COBRA-IV-I code by comparing its simulation results with experimental ones for the sample problems

  4. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
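
    A minimal sketch of the rule described above: the function below evaluates the proposed truncation depth 2.5 m / (1 - r) and, for rate 1/2, reproduces the familiar five-times-memory value; the memory length used in the example is arbitrary.

```python
# Sketch of the rule of thumb described above: truncation depth
# d = 2.5 * m / (1 - r) for a convolutional code with memory m and rate r,
# compared with the older "5 * m" rule that is exact only for r = 1/2.

def truncation_depth(memory, rate):
    if not 0.0 < rate < 1.0:
        raise ValueError("code rate must lie in (0, 1)")
    return 2.5 * memory / (1.0 - rate)

for rate in (1/2, 2/3, 3/4, 7/8):
    print(f"rate {rate:.3f}: depth ~ {truncation_depth(6, rate):.0f} trellis stages")
# For rate 1/2 the new rule reproduces the familiar 5*m value (30 for m = 6);
# for higher rates the required depth grows quickly.
```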

  5. A Deformation Analysis Code of CANDU Fuel under the Postulated Accident: ELOCA

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jung, Jong Yeob

    2006-11-15

    Deformations of the fuel element or fuel channel might be the main cause of fuel failure. Therefore, the accurate prediction of the deformation and the analysis capabilities are closely related to the increase of the safety margin of the reactor. In this report, among the performance analysis and transient behavior prediction computer codes, the analysis codes for deformation such as ELOCA, HOTSPOT, CONTACT-1, and PTDFORM are briefly introduced, and each code's objectives, applicability, and relations are explained. In particular, the user manual for the ELOCA code, which analyzes the fuel deformation and the release of fission products during the transient period after postulated accidents, is provided so that it can serve as guidance for potential users of the code and save time and economic loss by reducing trial and error.

  6. A Deformation Analysis Code of CANDU Fuel under the Postulated Accident: ELOCA

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jung, Jong Yeob

    2006-11-01

    Deformations of the fuel element or fuel channel might be the main cause of fuel failure. Therefore, the accurate prediction of the deformation and the analysis capabilities are closely related to the increase of the safety margin of the reactor. In this report, among the performance analysis and transient behavior prediction computer codes, the analysis codes for deformation such as ELOCA, HOTSPOT, CONTACT-1, and PTDFORM are briefly introduced, and each code's objectives, applicability, and relations are explained. In particular, the user manual for the ELOCA code, which analyzes the fuel deformation and the release of fission products during the transient period after postulated accidents, is provided so that it can serve as guidance for potential users of the code and save time and economic loss by reducing trial and error

  7. A computer code for the prediction of mill gases and hot air distribution between burners sections as input parameters for 3D CFD furnace calculation

    International Nuclear Information System (INIS)

    Tucakovic, Dragan; Zivanovic, Titoslav; Beloshevic, Srdjan

    2006-01-01

    Current computer technology enables the application of powerful software packages that can provide a reliable insight into the real operating conditions of a steam boiler in a thermal power plant. Namely, the application of a CFD code to the 3D analysis of combustion and heat transfer in a furnace provides temperature, velocity and concentration fields in both cross-sectional and longitudinal planes of the observed furnace. In order to obtain reliable analytical results which correspond to real furnace conditions, it is necessary to accurately predict the distribution of mill gases and hot air between the burners' sections, because these parameters are input values for the 3D furnace calculation. To address these tasks, a computer code for the prediction of the mill gas and hot air distribution has been developed at the Department for Steam Boilers of the Faculty of Mechanical Engineering in Belgrade. The code is based on simultaneous calculations of material and heat balances for the fan mill and air tracts. The aim of this paper is to present the methodology of the performed calculations and the results obtained for the steam boiler furnace of a 350 MWe thermal power plant equipped with eight fan mills. Key words: mill gases, hot air, aerodynamic calculation, air tract, mill tract.
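
    The balance idea underlying the code can be illustrated with a toy calculation (not the Belgrade code itself): the mixed mill-gas flow and temperature follow from mass conservation and an enthalpy balance with constant specific heats, after which the flow is split between burner sections. All flows, temperatures and specific heats below are invented placeholder values.

```python
# Simple illustration of the simultaneous mass/heat balance idea: the mixed
# mill-gas flow and temperature follow from mass conservation and an enthalpy
# balance with constant specific heats. All numbers are invented.

def mix_streams(streams):
    """streams: list of (mass_flow kg/s, temperature C, cp kJ/kg-K)."""
    m_tot = sum(m for m, _, _ in streams)
    h_tot = sum(m * cp * t for m, t, cp in streams)
    cp_mix = sum(m * cp for m, _, cp in streams) / m_tot
    return m_tot, h_tot / (m_tot * cp_mix)        # mixed flow and temperature

# hot air + recirculated flue gas feeding one fan mill (placeholder values)
m, t = mix_streams([(40.0, 320.0, 1.05), (25.0, 180.0, 1.10)])
per_section = m / 4.0                             # equal split to four burner sections
print(f"mill inlet: {m:.0f} kg/s at {t:.0f} C; {per_section:.1f} kg/s per section")
```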

  8. MARS-KS code validation activity through the atlas domestic standard problem

    International Nuclear Information System (INIS)

    Choi, K. Y.; Kim, Y. S.; Kang, K. H.; Park, H. S.; Cho, S.

    2012-01-01

    The 2nd Domestic Standard Problem (DSP-02) exercise using the ATLAS integral effect test data was executed to transfer the integral effect test data to domestic nuclear industries and to contribute to improving the safety analysis methodology for PWRs. A small-break loss-of-coolant accident with a 6-inch break at the cold leg was selected as the target scenario by considering its technical importance and by incorporating interests from participants. Ten calculation results using the MARS-KS code were collected, the major prediction results were described qualitatively, and the code prediction accuracy was assessed quantitatively using the FFTBM. In addition, special code assessment activities were carried out to identify the areas where model improvement is required in the MARS-KS code. The lessons from this DSP-02 and recommendations to code developers are described in this paper. (authors)

  9. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm combines an adaptive probability model with predictive coding to increase the compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates the symbol probabilities of that block, and the decoder can accurately recover each image block from the code-book information. Adopting an adaptive arithmetic coding algorithm for image compression greatly improves the compression rate. The results show that it is an effective compression technology.
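
    To make the adaptive-model idea concrete, the toy sketch below (floating-point, short inputs only, unrelated to the paper's implementation) encodes a symbol string by shrinking an interval according to symbol probabilities that are updated from counts as coding proceeds; practical arithmetic coders instead use integer arithmetic with renormalization and a matching decoder.

```python
# Toy illustration of adaptive arithmetic coding: symbol probabilities are
# estimated from counts that are updated as each symbol is coded, so no
# probability table has to be transmitted. Floating point, short inputs only.

from collections import Counter

def adaptive_encode(symbols, alphabet):
    counts = Counter({s: 1 for s in alphabet})    # Laplace-smoothed counts
    low, high = 0.0, 1.0
    for s in symbols:
        total = sum(counts.values())
        cum = 0                                    # cumulative count below s
        for a in alphabet:
            if a == s:
                break
            cum += counts[a]
        span = high - low
        high = low + span * (cum + counts[s]) / total
        low = low + span * cum / total
        counts[s] += 1                             # adapt the model
    return (low + high) / 2                        # any number in [low, high)

code_value = adaptive_encode("aababa", alphabet="ab")
print(code_value)
```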

  10. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  11. User Effect on Code Application and Qualification Needs

    International Nuclear Information System (INIS)

    D'Auria, F.; Salah, A.B.

    2008-01-01

    Experience with some code assessment case studies and additional ISPs has shown the dominant effect of the code user on the predicted system behavior. The general findings of the user-effect investigations on some of the case studies indicate, specifically, that in addition to user effects there are other reasons which affect the results of the calculations and are hidden under the general title of user effects: the specific characteristics of experimental facilities, i.e. their limitations as far as code assessment is concerned; limitations of the thermal-hydraulic codes used to simulate certain system behavior or phenomena; and limitations due to the interpretation of the experimental data by the code user, i.e. the interpretation of the experimental database. On the basis of the discussions in this paper, the following conclusions and recommendations can be made: More dialogue appears to be necessary with the experimenters in the planning of code assessment calculations, e.g. ISPs. User guidelines for the codes are not complete, and a lack of sufficient and detailed user guidelines is observed in some of the case studies. More extensive user instruction and training, improved user guidelines, or quality assurance procedures may partially reduce some of the subjective user influence on the calculated results. The discrepancies between experimental data and code predictions are due both to intrinsic code limits and to the so-called user effects, and there is a real need to quantify the share of the disagreement due to poor utilization of the code and that due to the code itself; this need arises especially for uncertainty evaluation studies (e.g. [18]) which do not take the mentioned user effects into account. A more focused investigation, based on the results of comparison calculations, e.g. ISPs, analyzing the experimental data and the results of the specific code in order to evaluate the user effects and the related experimental aspects should be integral part of the

  12. Development of CAP code for nuclear power plant containment: Lumped model

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon, E-mail: sjhong90@fnctech.com [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Ha, Sang Jun [Central Research Institute, Korea Hydro & Nuclear Power Company, Ltd., 70, 1312-gil, Yuseong-daero, Yuseong-gu, Daejeon 305-343 (Korea, Republic of)

    2015-09-15

    Highlights: • A state-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function for linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed in the Korean nuclear society for the analysis of nuclear containment thermal-hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a mode linked with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identifications. The assessments showed appropriate prediction capabilities of CAP.

  13. Development of CAP code for nuclear power plant containment: Lumped model

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul; Ha, Sang Jun

    2015-01-01

    Highlights: • A state-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function for linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed in the Korean nuclear society for the analysis of nuclear containment thermal-hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a mode linked with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identifications. The assessments showed appropriate prediction capabilities of CAP

  14. Development and Application of a Code for Internal Exposure (CINEX) based on the CINDY code

    International Nuclear Information System (INIS)

    Kravchik, T.; Duchan, N.; Sarah, R.; Gabay, Y.; Kol, R.

    2004-01-01

    Internal exposure to radioactive materials at the NRCN is evaluated using the CINDY (Code for Internal Dosimetry) package. The code was developed by the Pacific Northwest Laboratory to assist in the interpretation of bioassay data, provide bioassay projections, and evaluate committed and calendar-year doses from intake or bioassay measurement data. It provides capabilities to calculate organ doses and effective dose equivalents using the International Commission on Radiological Protection (ICRP) 30 approach. The CINDY code operates under the DOS operating system and consequently its operation requires a relatively long procedure that also includes a lot of manual typing, which can lead to human mistakes. A new code has been developed at the NRCN, the CINEX (Code for Internal Exposure), which is an Excel application and leads to a significant reduction in calculation time (on the order of 5 to 10 times) and in the risk of human mistakes. The code uses a database containing tables which were constructed with CINDY and contain the bioassay values predicted by the ICRP 30 model after an intake of a unit activity of each isotope. Using the database, the code then calculates the appropriate intake and consequently the committed effective dose and organ dose. Calculations with the CINEX code were compared to similar calculations with the CINDY code. The discrepancies were less than 5%, which is the rounding error of the CINDY code. Attached is a table which compares parameters calculated with the CINEX and CINDY codes (for class Y uranium). CINEX is now used at the NRCN to calculate occupational intakes and doses to workers handling radioactive materials
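
    The table-lookup approach described above can be sketched as follows (this is an illustration of the ICRP 30-style workflow, not the CINEX spreadsheet): an intake is estimated by dividing a bioassay measurement by the tabulated bioassay value predicted per unit intake at the elapsed time, and the committed effective dose follows from a dose coefficient. The retention fraction and dose coefficient in the example are placeholders.

```python
# Minimal sketch of the table-lookup workflow (not the CINEX spreadsheet):
# intake is estimated from a bioassay measurement and the model-predicted
# bioassay value per unit intake m(t), then multiplied by a dose coefficient.
# All numerical values here are placeholders.

def estimate_intake(measured_activity_bq, m_t_per_unit_intake):
    """Intake [Bq] = measurement / predicted value per unit intake at time t."""
    return measured_activity_bq / m_t_per_unit_intake

def committed_effective_dose(intake_bq, dose_coefficient_sv_per_bq):
    return intake_bq * dose_coefficient_sv_per_bq

# Hypothetical example: sample taken 7 days after a suspected intake
m_7d = 1.2e-4     # placeholder: predicted bioassay value per unit intake on day 7
e50 = 5.0e-7      # placeholder committed-dose coefficient [Sv/Bq]
intake = estimate_intake(measured_activity_bq=0.05, m_t_per_unit_intake=m_7d)
dose = committed_effective_dose(intake, e50)
print(f"intake ~ {intake:.1f} Bq, committed dose ~ {dose * 1e3:.2f} mSv")
```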

  15. Practical Design of Delta-Sigma Multiple Description Audio Coding

    DEFF Research Database (Denmark)

    Leegaard, Jack Højholt; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    It was recently shown that delta-sigma quantization (DSQ) can be used for optimal multiple description (MD) coding of Gaussian sources. The DSQ scheme combined oversampling, prediction, and noise-shaping in order to trade off side distortion for central distortion in MD coding. It was shown that ...

  16. Dual Coding in Children.

    Science.gov (United States)

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  17. Validating the BISON fuel performance code to integral LWR experiments

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, R.L., E-mail: Richard.Williamson@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gamble, K.A., E-mail: Kyle.Gamble@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Perez, D.M., E-mail: Danielle.Perez@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Novascone, S.R., E-mail: Stephen.Novascone@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Pastore, G., E-mail: Giovanni.Pastore@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gardner, R.J., E-mail: Russell.Gardner@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Hales, J.D., E-mail: Jason.Hales@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Liu, W., E-mail: Wenfeng.Liu@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States); Mai, A., E-mail: Anh.Mai@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States)

    2016-05-15

    Highlights: • The BISON multidimensional fuel performance code is being validated to integral LWR experiments. • Code and solution verification are necessary prerequisites to validation. • Fuel centerline temperature comparisons through all phases of fuel life are very reasonable. • Accuracy in predicting fission gas release is consistent with state-of-the-art modeling and the involved uncertainties. • Rod diameter comparisons are not satisfactory and further investigation is underway. - Abstract: BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Results demonstrate that (1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable with deviations between predictions and experimental data within ±10% for early life through high burnup fuel and only slightly out of these bounds for power ramp experiments, (2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties and (3) comparison
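
    The +/-10% acceptance band quoted above for fuel centreline temperature can be checked with a few lines of code; the sketch below uses invented temperature pairs rather than actual BISON or experimental values.

```python
# Sketch of the acceptance check quoted above: percent deviation of predicted
# fuel centreline temperature from the measured value, flagged against a
# +/-10% band. The temperature pairs are placeholders, not BISON results.

def percent_deviation(predicted, measured):
    return 100.0 * (predicted - measured) / measured

cases = {"rod A, early life": (1250.0, 1215.0),   # (predicted K, measured K)
         "rod B, high burnup": (1105.0, 1180.0),
         "rod C, power ramp": (1630.0, 1460.0)}

for name, (pred, meas) in cases.items():
    dev = percent_deviation(pred, meas)
    flag = "ok" if abs(dev) <= 10.0 else "outside +/-10% band"
    print(f"{name}: {dev:+.1f}% ({flag})")
```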

  18. TRAC-PF1 code verification with data from the OTIS test facility

    International Nuclear Information System (INIS)

    Childerson, M.T.; Fujita, R.K.

    1985-01-01

    A computer code (TRAC-PF1/MOD1) developed for predicting the transient thermal and hydraulic response of an integral nuclear steam supply system (NSSS) was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small-break LOCA data set for TRAC verification. The major phases of a small-break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool- and auxiliary-feedwater-initiated boiler-condenser mode heat transfer

  19. TRAC-PF1 code verification with data from the OTIS test facility

    International Nuclear Information System (INIS)

    Childerson, M.T.; Fujita, R.K.

    1985-01-01

    A computer code (TRAC-PF1/MOD1, denoted as TRAC) developed for predicting the transient thermal and hydraulic response of an integral nuclear steam supply system (NSSS) was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small-break LOCA data set for TRAC verification. The major phases of a small-break LOCA observed in the OTIS tests included pressurizer draining and saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool- and auxiliary-feedwater-initiated boiler-condenser mode heat transfer

  20. Cracking the code of oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Philippe G Schyns

    2011-05-01

    Neural oscillations are ubiquitous measurements of cognitive processes and of the dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information relating to the cognitive task (2.4 times more) than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for the behavioral response, that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.

  1. WSPEEDI (worldwide version of SPEEDI): A computer code system for the prediction of radiological impacts on Japanese due to a nuclear accident in foreign countries

    Energy Technology Data Exchange (ETDEWEB)

    Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishikawa, Hirohiko

    1995-09-01

    A computer code system has been developed for near-real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information), aims at predicting the radiological impact on the Japanese population due to a nuclear accident in foreign countries. WSPEEDI consists of a mass-consistent wind model, WSYNOP, for large-scale wind fields and a particle random walk model, GEARN, for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with system control software, a worldwide geographic database, a meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near-real-time simulations of the European Tracer Experiment (ETEX), carried out over a domain of about 2,000 km in Europe. (author).
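
    The Lagrangian particle random-walk approach used by GEARN can be illustrated with a bare-bones sketch (not the WSPEEDI/GEARN source): each particle is advected by the local wind and given a Gaussian random displacement each time step, and concentrations would follow from binning particle masses on a grid. The wind, diffusivity and release used below are placeholder values.

```python
# Bare-bones Lagrangian random-walk step of the kind the abstract describes
# (not GEARN itself). Each particle is advected by the wind and given a random
# turbulent displacement; wind and diffusivity values are placeholders.

import math
import random

def step_particles(particles, wind_uv, diffusivity, dt):
    """Advance (x, y) positions by one step [m]; wind in m/s, K in m^2/s, dt in s."""
    u, v = wind_uv
    sigma = math.sqrt(2.0 * diffusivity * dt)      # random-walk displacement scale
    return [(x + u * dt + random.gauss(0.0, sigma),
             y + v * dt + random.gauss(0.0, sigma)) for x, y in particles]

particles = [(0.0, 0.0)] * 1000                    # release at the origin
for _ in range(24):                                # 24 hourly steps
    particles = step_particles(particles, wind_uv=(5.0, 1.0),
                               diffusivity=500.0, dt=3600.0)
mean_x = sum(p[0] for p in particles) / len(particles)
print(f"plume centre has travelled ~ {mean_x / 1000:.0f} km downwind")
```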

  2. WSPEEDI (worldwide version of SPEEDI): A computer code system for the prediction of radiological impacts on Japanese due to a nuclear accident in foreign countries

    International Nuclear Information System (INIS)

    Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru; Ishikawa, Hirohiko.

    1995-09-01

    A computer code system has been developed for near-real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information), aims at predicting the radiological impact on the Japanese population due to a nuclear accident in foreign countries. WSPEEDI consists of a mass-consistent wind model, WSYNOP, for large-scale wind fields and a particle random walk model, GEARN, for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with system control software, a worldwide geographic database, a meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near-real-time simulations of the European Tracer Experiment (ETEX), carried out over a domain of about 2,000 km in Europe. (author)

  3. Development of a subchannel analysis code MATRA (Ver. α)

    International Nuclear Information System (INIS)

    Yoo, Y. J.; Hwang, D. H.

    1998-04-01

    A subchannel analysis code, MATRA-α, an interim version of MATRA, has been developed to run on an IBM PC or HP workstation, based on the existing CDC CYBER mainframe version of COBRA-IV-I. The MATRA code is a thermal-hydraulic analysis code based on the subchannel approach for calculating the enthalpy and flow distribution in fuel assemblies and reactor cores for both steady-state and transient conditions. MATRA-α has been provided with an improved structure, various functions, and models to give a more convenient user environment and to increase the code accuracy. Among them, the pressure drop model has been improved so that it can be applied to non-square-lattice rod arrays, and the lateral transport models between adjacent subchannels have been improved to increase the accuracy in predicting two-phase flow phenomena. Also included in this report are detailed instructions for input data preparation and for the auxiliary pre-processors, to serve as a guide for those who want to use MATRA-α. In addition, we compared the predictions of MATRA-α with experimental data on the flow and enthalpy distribution in three sample rod-bundle cases to evaluate the performance of MATRA-α. All the results revealed that the predictions of MATRA-α were better than those of COBRA-IV-I. (author). 16 refs., 1 tab., 13 figs

  4. Comparison of several databases of downward solar daily irradiation data at ocean surface with PIRATA measurements

    Science.gov (United States)

    Trolliet, Mélodie; Wald, Lucien

    2017-04-01

    The solar radiation impinging on the sea surface is an essential variable of the climate system. There are several means of assessing the daily irradiation at the surface, such as pyranometers aboard ships or on buoys, meteorological re-analyses and satellite-derived databases. Among the latter, assessments made from the series of geostationary Meteosat satellites offer synoptic views of the tropical and equatorial Atlantic Ocean every 15 min with a spatial resolution of approximately 5 km. Such Meteosat-derived databases are fairly recent and the quality of their estimates of daily irradiation must be established. Efforts have been made for the land masses and must be repeated for the Atlantic Ocean. The Prediction and Research Moored Array in the Tropical Atlantic (PIRATA) network of moorings in the Tropical Atlantic Ocean is considered a reference for oceanographic data. It consists of 17 long-term Autonomous Temperature Line Acquisition System (ATLAS) buoys equipped with sensors to measure near-surface meteorological and subsurface oceanic parameters, including downward solar irradiation. Corrected downward solar daily irradiation data from PIRATA were downloaded from the NOAA web site and were compared to several databases: CAMS RAD, HelioClim-1, HelioClim-3 v4 and HelioClim-3 v5. CAMS RAD, the CAMS radiation service, combines products of the Copernicus Atmosphere Monitoring Service (CAMS) on the gaseous content and aerosols of the atmosphere with cloud optical properties deduced every 15 min from Meteosat imagery to supply estimates of the solar irradiation. Part of this service is the McClear clear-sky model, which provides estimates of the solar irradiation that would be observed in cloud-free conditions. The second and third databases are HelioClim-1 and HelioClim-3 v4, which are derived from Meteosat images using the Heliosat-2 method and the ESRA clear-sky model based on the Linke turbidity factor. HelioClim-3 v5 is the fourth database and differs from v4 by the
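
    A comparison of this kind typically reduces to simple validation statistics; the sketch below computes the bias and RMSE of satellite-derived daily irradiation against co-located buoy values, using placeholder numbers rather than actual HelioClim or PIRATA data.

```python
# Sketch of the kind of comparison described above: bias and RMSE of
# satellite-derived daily irradiation against PIRATA buoy measurements.
# The arrays below are placeholders for co-located daily values.

import math

def bias_and_rmse(estimates, references):
    diffs = [e - r for e, r in zip(estimates, references)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

helioclim = [21.4, 19.8, 25.1, 23.0]    # placeholder daily irradiation, MJ/m2
pirata = [20.9, 20.5, 24.2, 23.4]
b, r = bias_and_rmse(helioclim, pirata)
print(f"bias = {b:+.2f} MJ/m2, RMSE = {r:.2f} MJ/m2")
```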

  5. Quantifying reactor safety margins: Application of CSAU [Code Scalability, Applicability and Uncertainty] methodology to LBLOCA: Part 3, Assessment and ranging of parameters for the uncertainty analysis of LBLOCA codes

    International Nuclear Information System (INIS)

    Wulff, W.; Boyack, B.E.; Duffey, R.B.

    1988-01-01

    Comparisons of results from TRAC-PF1/MOD1 code calculations with measurements from Separate Effects Tests, and published experimental data for modeling parameters have been used to determine the uncertainty ranges of code input and modeling parameters which dominate the uncertainty in predicting the Peak Clad Temperature for a postulated Large Break Loss of Coolant Accident (LBLOCA) in a four-loop Westinghouse Pressurized Water Reactor. The uncertainty ranges are used for a detailed statistical analysis to calculate the probability distribution function for the TRAC code-predicted Peak Clad Temperature, as is described in an attendant paper. Measurements from Separate Effects Tests and Integral Effects Tests have been compared with results from corresponding TRAC-PF1/MOD1 code calculations to determine globally the total uncertainty in predicting the Peak Clad Temperature for LBLOCAs. This determination is in support of the detailed statistical analysis mentioned above. The analyses presented here account for uncertainties in input parameters, in modeling and scaling, in computing and in measurements. The analyses are an important part of the work needed to implement the Code Scalability, Applicability and Uncertainty (CSAU) methodology. CSAU is needed to determine the suitability of a computer code for reactor safety analyses and the uncertainty in computer predictions. The results presented here are used to estimate the safety margin of a particular nuclear reactor power plant for a postulated accident. 25 refs., 10 figs., 11 tabs
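
    The statistical step referred to above can be illustrated with a toy Monte Carlo propagation (not the CSAU calculation itself): uncertain inputs are sampled within assessed ranges, a model is run for each sample, and the resulting peak clad temperature distribution is summarized by a high percentile. The response surface and parameter ranges below are invented for illustration.

```python
# Illustrative Monte Carlo propagation in the spirit of the CSAU statistical
# step: sample uncertain inputs within their assessed ranges, evaluate a
# (here, surrogate) model for peak clad temperature, and build the PCT
# distribution. The surrogate and the parameter ranges are invented.

import random

def surrogate_pct(gap_conductance, chf_multiplier, decay_heat):
    """Placeholder response surface standing in for a full TRAC calculation."""
    return (900.0 + 150.0 / gap_conductance
            + 250.0 * (1.1 - chf_multiplier)
            + 400.0 * (decay_heat - 1.0))

samples = []
for _ in range(10000):
    pct = surrogate_pct(gap_conductance=random.uniform(0.8, 1.2),
                        chf_multiplier=random.uniform(0.9, 1.1),
                        decay_heat=random.uniform(0.98, 1.06))
    samples.append(pct)

samples.sort()
print(f"95th-percentile PCT ~ {samples[int(0.95 * len(samples))]:.0f} K")
```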

  6. nocoRNAc: Characterization of non-coding RNAs in prokaryotes

    Directory of Open Access Journals (Sweden)

    Nieselt Kay

    2011-01-01

    Background: Interest in non-coding RNAs (ncRNAs) has risen constantly during the past few years because of the wide spectrum of biological processes in which they are involved. This has led to the discovery of numerous ncRNA genes across many species. However, for most organisms the non-coding transcriptome still remains largely unexplored. Various experimental techniques for the identification of ncRNA transcripts are available, but as these methods are costly and time-consuming, there is a need for computational methods that detect functional RNAs in complete genomes and suggest elements for further experiments. Several programs for the genome-wide prediction of functional RNAs have been developed, but most of them predict a genomic locus with no indication of whether the element is transcribed or not. Results: We present nocoRNAc, a program for the genome-wide prediction of ncRNA transcripts in bacteria. nocoRNAc incorporates various procedures for the detection of transcriptional features, which are then integrated with functional ncRNA loci to determine the transcript coordinates. We applied RNAz and nocoRNAc to the genome of Streptomyces coelicolor and detected more than 800 putative ncRNA transcripts, most of them located antisense to protein-coding regions. Using a custom-design microarray we profiled the expression of about 400 of these elements and found more than 300 to be transcribed; 38 of them are predicted novel ncRNA genes in intergenic regions. The expression patterns of many ncRNAs are as complex as those of the protein-coding genes; in particular, many antisense ncRNAs show a high expression correlation with their protein-coding partner. Conclusions: We have developed nocoRNAc, a framework that facilitates the automated characterization of functional ncRNAs. nocoRNAc increases the confidence of predicted ncRNA loci, especially if they contain transcribed ncRNAs. nocoRNAc is not restricted to

  7. Double blind post-test prediction for LOBI-MOD2 small break experiment A2-81 using RELAP5/MOD1/19 computer code as contribution to international CSNI-standardproblem no. 18

    International Nuclear Information System (INIS)

    Jacobs, G.; Mansoor, S.H.

    1986-06-01

    The first small-break experiment A2-81 performed in the LOBI-MOD2 test facility was the basis of the 18th international CSNI standard problem (ISP 18). As a contribution to this exercise, a blind post-test prediction was performed using the light water reactor transient analysis code RELAP5/MOD1. This paper describes the preparation of the input model and summarizes the findings of the pre-calculation, comparing the calculational results with the experimental data. The results show good agreement between prediction and experiment in the initial stage (up to 250 s) of the transient and an adequate prediction of the global behaviour (thermal response of the core), which is important for safety-related considerations. However, the prediction confirmed some deficiencies of the models in the code concerning vertical and horizontal stratification, resulting in a high break mass flow and an erroneous distribution of mass over the primary loops. (orig.)

  8. Improvement of a combustion model in MELCOR code

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    1999-01-01

    NUPEC has been improving the hydrogen combustion model in the MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node is predicted using five different flame front shapes: fireball, prism, bubble, spherical jet, and plane jet. For validation of the proposed model, the results of the Battelle multi-compartment hydrogen combustion tests were used. The test cases selected for the study were Hx-6, 13, 14, 20 and Ix-2, which had two, three or four compartments under homogeneous hydrogen concentrations of 5 to 10 vol%. On the whole, the proposed model could predict the combustion behavior in multi-compartment containment geometry well. The MELCOR code incorporating the present combustion model can simulate combustion behavior during a severe accident with acceptable computing time and some degree of accuracy. The applicability study of the improved MELCOR code for actual reactor plants will be continued. (author)

  9. Technical and tactical preparedness of the team «Helios» of Kharkov in the 25th football championship of Ukraine in the first league (the first round in 2015)

    Directory of Open Access Journals (Sweden)

    Sergey Zhurid

    2016-04-01

    Purpose: to define the model characteristics of the technical and tactical preparedness of a team which participated in the first-league championship of Ukraine, for the purpose of further improving and correcting the educational and training process. Material & Methods: the research was carried out by means of the method of expert estimates; five football specialists were involved as experts. Results: average values of the registered indicators over 10 games were analyzed. Various technical and tactical actions and their differences between the first and second halves, as well as individual indicators of the play of the players and of the team «Helios» of Kharkov, are analyzed. Conclusions: quantitative and qualitative indicators (flaw coefficient) were obtained both for the team's technical and tactical actions as a whole and separately for each technical-tactical technique in every period of a game.

  10. The WINCON programme - validation of fast reactor primary containment codes

    International Nuclear Information System (INIS)

    Sidoli, J.E.A.; Kendall, K.C.

    1988-01-01

    In the United Kingdom, safety studies for the Commercial Demonstration Fast Reactor (CDFR) include an assessment of the capability of the primary containment to provide adequate protection against the hazards resulting from a hypothetical Whole Core Accident (WCA). The assessment is based on calculational estimates using computer codes, supported by measured evidence from small-scale experiments. The hydrodynamic containment code SEURBNUK-EURDYN is capable of representing a prescribed energy release, the sodium coolant and cover gas, and the main containment and safety-related internal structures. Containment loadings estimated using SEURBNUK-EURDYN are used in the structural dynamics code EURDYN-03 for the prediction of the containment response. The experiments serve two purposes: they demonstrate the response of the CDFR containment to accident loadings and they provide data for the validation of the codes. This paper summarises the recently completed WINfrith CONtainment (WINCON) experiments that studied the response of specific features of current CDFR design options to WCA loadings. The codes have been applied to some of the experiments and a satisfactory prediction of the global response of the model containment is obtained. This provides confidence in the use of the codes in reactor assessments. (author)

  11. Signalign: An Ontology of DNA as Signal for Comparative Gene Structure Prediction Using Information-Coding-and-Processing Techniques.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Gu, Feng; Pan, Yi

    2016-03-01

    Conventional character-analysis-based techniques in genome analysis manifest three main shortcomings: inefficiency, inflexibility, and incompatibility. In our previous research, a general framework called DNA As X was proposed for character-analysis-free techniques to overcome these shortcomings, where X is an intermediate representation such as a digit, code, signal, vector, tree, or graph network. In this paper, we further implement an ontology of DNA As Signal by designing a tool named Signalign for comparative gene structure analysis, in which DNA sequences are converted into signal series, processed by a modified dynamic time warping method, and measured by the signal-to-noise ratio (SNR). The ontology of DNA As Signal integrates the principles and concepts of other disciplines, including information coding theory and signal processing, into sequence analysis and processing. Compared with conventional character-analysis-based methods, Signalign not only achieves equivalent or superior performance, but also enriches the tools and the knowledge library of computational biology by extending the domain from characters/strings to diverse areas. The evaluation results validate the success of the character-analysis-free technique for improved performance in comparative gene structure prediction.
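
    A hedged sketch of the DNA-as-signal idea (not the Signalign implementation): nucleotides are mapped to arbitrary numeric levels and two sequences are compared with a plain dynamic-time-warping distance; Signalign's modified DTW and SNR measure are not reproduced here.

```python
# Sketch of the "DNA As Signal" idea (not the Signalign implementation):
# nucleotides are mapped to numbers, and two sequences are compared with a
# plain dynamic-time-warping distance. The numeric mapping is arbitrary.

def dna_to_signal(seq, mapping={"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}):
    return [mapping[base] for base in seq.upper()]

def dtw_distance(s, t):
    inf = float("inf")
    d = [[inf] * (len(t) + 1) for _ in range(len(s) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = abs(s[i - 1] - t[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(s)][len(t)]

a = dna_to_signal("ATGGCGT")
b = dna_to_signal("ATGCGT")
print(f"DTW distance = {dtw_distance(a, b):.1f}")
```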

  12. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

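    The calibration loop described in the record above can be illustrated with a toy genetic algorithm. The sketch below is a hedged, self-contained stand-in: the "simulator" is an invented analytic function rather than RELAP5, and the parameter names, bounds, weights and experimental targets are assumptions chosen only to show how a weighted fitness over several SRQs drives the search.

        # Toy sketch of GA-based input calibration in the spirit described above.
        # The "simulator" here is a stand-in analytic function, NOT RELAP5; parameter
        # names, bounds, weights and target values are illustrative assumptions.
        import random

        EXPERIMENT = {"max_flow_kg_s": 0.45, "oscillation_period_s": 92.0}  # assumed data
        WEIGHTS = {"max_flow_kg_s": 1.0, "oscillation_period_s": 1.0}
        BOUNDS = [(0.5, 2.0), (0.001, 0.01)]  # e.g. form-loss factor, wall roughness (assumed)

        def simulator(params):
            """Stand-in for a code run: maps calibrated inputs to system response quantities."""
            k_loss, rough = params
            return {"max_flow_kg_s": 0.6 / (1.0 + k_loss) + 5.0 * rough,
                    "oscillation_period_s": 60.0 + 25.0 * k_loss - 500.0 * rough}

        def fitness(params):
            """Negative weighted sum of normalized discrepancies (higher is better)."""
            srq = simulator(params)
            err = sum(WEIGHTS[k] * abs(srq[k] - EXPERIMENT[k]) / abs(EXPERIMENT[k])
                      for k in EXPERIMENT)
            return -err

        def random_individual():
            return [random.uniform(lo, hi) for lo, hi in BOUNDS]

        def mutate(ind, rate=0.3):
            """Gaussian perturbation of each gene, clamped to its bounds."""
            return [min(hi, max(lo, g + random.gauss(0.0, 0.1 * (hi - lo))))
                    if random.random() < rate else g
                    for g, (lo, hi) in zip(ind, BOUNDS)]

        def crossover(a, b):
            return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

        def calibrate(pop_size=40, generations=60):
            pop = [random_individual() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]
                children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(pop_size - len(parents))]
                pop = parents + children
            return max(pop, key=fitness)

        if __name__ == "__main__":
            best = calibrate()
            print("calibrated inputs:", best, "-> SRQs:", simulator(best))
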
  13. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  14. User's manual for DSTAR MOD1: A comprehensive tokamak disruption code

    International Nuclear Information System (INIS)

    Merrill, B.J.; Jardin, S.J.

    1986-01-01

    A computer code, DSTAR, has recently been developed to quantify the surface erosion and induced forces that can occur during major tokamak plasma disruptions. The DSTAR code development effort has been accomplished by coupling a recently developed free boundary tokamak plasma transport computational model with other models developed to predict impurity transport and radiation, and the electromagnetic and thermal dynamic response of vacuum vessel components. The combined model, DSTAR, is a unique tool for predicting the consequences of tokamak disruptions. This informal report discusses the sequence of events of a resistive disruption, models developed to predict plasma transport and electromagnetic field evolution, the growth of the stochastic region of the plasma, the transport and nonequilibrium ionization/emitted radiation of the ablated vacuum vessel material, the vacuum vessel thermal and magnetic response, and user input and code output

  15. Analysis of CSNI benchmark test on containment using the code CONTRAN

    International Nuclear Information System (INIS)

    Haware, S.K.; Ghosh, A.K.; Raj, V.V.; Kakodkar, A.

    1994-01-01

    A programme of experimental as well as analytical studies on the behaviour of nuclear reactor containment is being actively pursued. A large number of experiments on pressure and temperature transients have been carried out on a one-tenth scale model vapour suppression pool containment experimental facility, simulating the 220 MWe Indian Pressurised Heavy Water Reactors. A programme of development of computer codes is underway to enable prediction of containment behaviour under accident conditions. This includes codes for pressure and temperature transients, hydrogen behaviour, aerosol behaviour, etc. As a part of this ongoing work, the code CONTRAN (CONtainment TRansient ANalysis) has been developed for predicting the thermal hydraulic transients in a multicompartment containment. For the assessment of hydrogen behaviour, models for hydrogen transport in a multicompartment configuration and for hydrogen combustion have been incorporated in the code CONTRAN. The code also has models for heat and mass transfer due to condensation and for convective heat transfer. The structural heat transfer is modeled using the one-dimensional transient heat conduction equation. Extensive validation exercises have been carried out with the code CONTRAN. The code CONTRAN has been successfully used for the analysis of the benchmark test devised by the Committee on the Safety of Nuclear Installations (CSNI) of the Organisation for Economic Cooperation and Development (OECD) to test the numerical accuracy and convergence errors in the computation of mass and energy conservation for the fluid and in the computation of heat conduction in structural walls. The salient features of the code CONTRAN, a description of the CSNI benchmark test, and a comparison of the CONTRAN predictions with the benchmark test results are presented and discussed in the paper. (author)
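
    For the structural heat transfer part mentioned above, a minimal explicit finite-difference sketch of one-dimensional transient heat conduction through a wall is given below; the material properties, wall thickness, node count and fixed face temperatures are assumed placeholder values, and the scheme is only a generic illustration, not the CONTRAN model.

        # Minimal explicit finite-difference sketch of 1-D transient heat conduction
        # through a containment wall, in the spirit of the structural model described
        # above. Properties and boundary conditions are assumed placeholders.
        def wall_conduction(t_inside=120.0, t_outside=30.0, t_init=30.0,
                            thickness=0.3, alpha=5.0e-7, nodes=31, t_end=600.0):
            dx = thickness / (nodes - 1)
            dt = 0.4 * dx * dx / alpha               # respects the explicit stability limit
            temp = [t_init] * nodes
            temp[0], temp[-1] = t_inside, t_outside  # fixed-temperature faces (assumption)
            for _ in range(int(t_end / dt)):
                new = temp[:]
                for i in range(1, nodes - 1):
                    new[i] = temp[i] + alpha * dt / (dx * dx) * (temp[i + 1] - 2 * temp[i] + temp[i - 1])
                temp = new
            return temp

        if __name__ == "__main__":
            profile = wall_conduction()
            print(" ".join(f"{t:6.1f}" for t in profile))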

  16. Preliminary Analysis of Rapid Condensation Experiment with MARS-KS Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Jae Ho; Jun, Hwang Yong; Jeong, Hae Yong [Sejong University, Seoul (Korea, Republic of)

    2016-05-15

    In the present study, the rapid condensation experiment performed in the MANOTEA facility is analyzed with the MARS-KS code. It is known that system codes have limitations in predicting this kind of very active condensation, which is driven by the direct mixing of a cold injected flow with steam. Through the analysis we investigated the applicability of the MARS-KS code to the design of various passive safety systems in the future. The configuration of the MANOTEA experimental facility, which has been constructed at the University of Maryland - United States Naval Academy, is described, and the modeling approach using the MARS-KS code is also provided. The preliminary results show that MARS-KS correctly predicts the general trends of pressure and temperature in the condensing part. However, it is also found that there are some limitations in the simulation, such as an unexpected pressure peak and a sudden temperature change.

  17. RADTRAN II: revised computer code to analyze transportation of radioactive material

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1982-10-01

    A revised and updated version of the RADTRAN computer code is presented. This code has the capability to predict the radiological impacts associated with specific schemes of radioactive material shipments and mode specific transport variables

  18. Cavitation Modeling in Euler and Navier-Stokes Codes

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation which allows the effect of subcooling in the vicinity of the cavity interface to be modeled to take into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional, turbomachinery codes, the emphasis in the present paper will center on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggests that these models are appropriate for incorporation in current generation turbomachinery codes.

  19. Aeroelastic code development activities in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Wright, A.D. [National Renewable Energy Lab., Golden, Colorado (United States)

    1996-09-01

    Designing wind turbines to be fatigue resistant and to have long lifetimes at minimal cost is a major goal of the federal wind program and the wind industry in the United States. To achieve this goal, we must be able to predict critical loads for a wide variety of different wind turbines operating under extreme conditions. The codes used for wind turbine dynamic analysis must be able to analyze a wide range of different wind turbine configurations as well as rapidly predict the loads due to turbulent wind inflow with a minimal set of degrees of freedom. Code development activities in the US have taken a two-pronged approach in order to satisfy both of these criteria: (1) development of a multi-purpose code which can be used to analyze a wide variety of wind turbine configurations without having to develop new equations of motion with each configuration change, and (2) development of specialized codes with minimal sets of specific degrees of freedom for analysis of two- and three-bladed horizontal axis wind turbines and calculation of machine loads due to turbulent inflow. In the first method we have adapted a commercial multi-body dynamics simulation package for wind turbine analysis. In the second approach we are developing specialized codes with limited degrees of freedom, usually specified in the modal domain. This paper will summarize progress to date in the development, validation, and application of these codes. (au) 13 refs.

  20. Code requirements document: MODFLOW 2.1: A program for predicting moderator flow patterns

    International Nuclear Information System (INIS)

    Peterson, P.F.

    1992-03-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors

  1. Development of input data to energy code for analysis of reactor fuel bundles

    International Nuclear Information System (INIS)

    Carre, F.O.; Todreas, N.E.

    1975-05-01

    The ENERGY 1 code is a semi-empirical method for predicting temperature distributions in wire-wrapped rod bundles of an LMFBR. A comparison of ENERGY 1 and MISTRAL 2 is presented. The predictions of ENERGY 1 for special sets of data taken under geometric conditions at the limits of the code are analyzed. 14 references

  2. Microfocusing of the FERMI@Elettra FEL beam with a K–B active optics system: Spot size predictions by application of the WISE code

    International Nuclear Information System (INIS)

    Raimondi, L.; Svetina, C.; Mahne, N.; Cocco, D.; Abrami, A.; De Marco, M.; Fava, C.; Gerusina, S.; Gobessi, R.; Capotondi, F.; Pedersoli, E.; Kiskinova, M.; De Ninno, G.; Zeitoun, P.; Dovillaire, G.; Lambert, G.; Boutu, W.; Merdji, H.; Gonzalez, A.I.; Gauthier, D.

    2013-01-01

    FERMI@Elettra, the first seeded EUV-SXR free electron laser (FEL) facility, located at Elettra Sincrotrone Trieste, has been conceived to provide very short (10–100 fs) pulses with ultrahigh peak brightness and wavelengths from 100 nm to 4 nm. A section fully dedicated to photon transport and analysis diagnostics, named PADReS, has already been installed and commissioned. Three of the beamlines, EIS-TIMEX, DiProI and LDM, installed after the PADReS section, are in an advanced commissioning state and will accept the first users in December 2012. These beamlines employ active X-ray optics in order to focus the FEL beam as well as to perform controlled beam-shaping at focus. Starting from mirror surface metrology characterization, it is difficult to predict the focal spot shape by applying only methods based on geometrical optics, such as ray tracing. Within the geometrical optics approach one cannot take into account the diffraction effect from the optics edges, i.e. the aperture diffraction, or the impact of different surface spatial wavelengths on the spot size degradation. Both effects are strongly dependent on the photon beam energy and the mirror incidence angles. We therefore employed a method based on physical optics, which applies the Huygens–Fresnel principle to reflection and on which the WISE code is based. In this work we report the results of the first measurements of the focal spot in the DiProI beamline end-station and compare them to the predictions computed with the Shadow code and the WISE code, starting from the mirror surface profile characterization.

  3. Microfocusing of the FERMI@Elettra FEL beam with a K–B active optics system: Spot size predictions by application of the WISE code

    Energy Technology Data Exchange (ETDEWEB)

    Raimondi, L., E-mail: lorenzo.raimondi@elettra.trieste.it [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Svetina, C.; Mahne, N. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Cocco, D. [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS-19 Menlo Park, CA 94025 (United States); Abrami, A.; De Marco, M.; Fava, C.; Gerusina, S.; Gobessi, R.; Capotondi, F.; Pedersoli, E.; Kiskinova, M. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); De Ninno, G. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); University of Nova Gorica, Vipavska 13, Rozna Dolina, SI-5000 Nova Gorica (Slovenia); Zeitoun, P. [Laboratoire d' Optique Appliquée, CNRS-ENSTA-École Polytechnique, Chemin de la Humiére, 91761 Palaiseau (France); Dovillaire, G. [Imagine Optic, 18 Rue Charles de Gaulle, 91400 Orsay (France); Lambert, G. [Laboratoire d' Optique Appliquée, CNRS-ENSTA-École Polytechnique, Chemin de la Humiére, 91761 Palaiseau (France); Boutu, W.; Merdji, H.; Gonzalez, A.I. [Service des Photons, Atomes et Molécules, IRAMIS, CEA-Saclay, Btiment 522, 91191 Gif-sur-Yvette (France); Gauthier, D. [University of Nova Gorica, Vipavska 13, Rozna Dolina, SI-5000 Nova Gorica (Slovenia); and others

    2013-05-11

    FERMI@Elettra, the first seeded EUV-SXR free electron laser (FEL) facility, located at Elettra Sincrotrone Trieste, has been conceived to provide very short (10–100 fs) pulses with ultrahigh peak brightness and wavelengths from 100 nm to 4 nm. A section fully dedicated to photon transport and analysis diagnostics, named PADReS, has already been installed and commissioned. Three of the beamlines, EIS-TIMEX, DiProI and LDM, installed after the PADReS section, are in an advanced commissioning state and will accept the first users in December 2012. These beamlines employ active X-ray optics in order to focus the FEL beam as well as to perform controlled beam-shaping at focus. Starting from mirror surface metrology characterization, it is difficult to predict the focal spot shape by applying only methods based on geometrical optics, such as ray tracing. Within the geometrical optics approach one cannot take into account the diffraction effect from the optics edges, i.e. the aperture diffraction, or the impact of different surface spatial wavelengths on the spot size degradation. Both effects are strongly dependent on the photon beam energy and the mirror incidence angles. We therefore employed a method based on physical optics, which applies the Huygens–Fresnel principle to reflection and on which the WISE code is based. In this work we report the results of the first measurements of the focal spot in the DiProI beamline end-station and compare them to the predictions computed with the Shadow code and the WISE code, starting from the mirror surface profile characterization.

  4. Development and verification test of integral reactor major components - Development of MCP impeller design, performance prediction code and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Myung Kyoon; Oh, Woo Hyoung; Song, Jae Wook [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-03-01

    The present study is aimed at developing a computational code for the design and performance prediction of an axial-flow pump. The proposed performance prediction method, a streamline curvature method, is tested against a model axial-flow pump. The preliminary design is made by using the ideal velocity triangles at inlet and exit, and the three-dimensional blade shape is calculated by employing the free vortex design method. The detailed blading design is then carried out by using an experimental database of double circular arc cambered hydrofoils. To computationally determine the design incidence, deviation, blade camber, solidity and stagger angle, a number of correlation equations are developed from the experimental database and a theoretical formula for the lift coefficient is adopted. A total of 8 equations are solved iteratively using an under-relaxation factor. An experimental measurement is conducted under a non-cavitating condition to obtain the off-design performance curve, and a cavitation test is also carried out by reducing the suction pressure. The experimental results compare very satisfactorily with the predictions of the streamline curvature method. 28 refs., 26 figs., 11 tabs. (Author)
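
    The iterative solution strategy mentioned above (eight coupled correlation equations solved with an under-relaxation factor) can be illustrated with a much smaller stand-in. In the sketch below the two coupled relations are invented placeholders; only the under-relaxed fixed-point iteration pattern is meant to be representative.

        # Sketch of an under-relaxed fixed-point iteration for coupled correlations.
        # The two relations below are invented stand-ins for the paper's eight
        # correlation equations; only the solution pattern is illustrated.
        def solve_coupled(relax=0.5, tol=1e-8, max_iter=500):
            deviation, incidence = 5.0, 2.0           # initial guesses in degrees (assumed)
            for it in range(max_iter):
                # Stand-in correlations: each new value depends on the other unknown.
                dev_new = 4.0 + 0.3 * incidence
                inc_new = 1.5 + 0.2 * deviation
                # Under-relaxation damps the update to keep the iteration stable.
                dev_next = deviation + relax * (dev_new - deviation)
                inc_next = incidence + relax * (inc_new - incidence)
                if abs(dev_next - deviation) < tol and abs(inc_next - incidence) < tol:
                    return dev_next, inc_next, it
                deviation, incidence = dev_next, inc_next
            raise RuntimeError("did not converge")

        if __name__ == "__main__":
            dev, inc, iters = solve_coupled()
            print(f"deviation={dev:.4f} deg, incidence={inc:.4f} deg after {iters} iterations")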

  5. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version3-beta

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr, and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of

  6. Lost opportunities: Modeling commercial building energy code adoption in the United States

    International Nuclear Information System (INIS)

    Nelson, Hal T.

    2012-01-01

    This paper models the adoption of commercial building energy codes in the US between 1977 and 2006. Energy code adoption typically results in an increase in aggregate social welfare by cost effectively reducing energy expenditures. Using a Cox proportional hazards model, I test if relative state funding, a new, objective, multivariate regression-derived measure of government capacity, as well as a vector of control variables commonly used in comparative state research, predict commercial building energy code adoption. The research shows little political influence over historical commercial building energy code adoption in the sample. Colder climates and higher electricity prices also do not predict more frequent code adoptions. I do find evidence of high government capacity states being 60 percent more likely than low capacity states to adopt commercial building energy codes in the following year. Wealthier states are also more likely to adopt commercial codes. Policy recommendations to increase building code adoption include increasing access to low cost capital for the private sector and providing noncompetitive block grants to the states from the federal government. - Highlights: ► Model the adoption of commercial building energy codes from 1977–2006 in the US. ► Little political influence over historical building energy code adoption. ► High capacity states are over 60 percent more likely than low capacity states to adopt codes. ► Wealthier states are more likely to adopt commercial codes. ► Access to capital and technical assistance is critical to increase code adoption.
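
    A hedged sketch of the survival-analysis setup described above is given below, using the open-source lifelines package rather than the author's data or code; the tiny synthetic data frame, column names and covariates are illustrative assumptions only.

        # Illustrative Cox proportional hazards fit with lifelines. The data frame is
        # synthetic and the covariate names are assumptions, not the study's dataset.
        import pandas as pd
        from lifelines import CoxPHFitter

        # One row per state: years observed until code adoption (or censoring),
        # an adoption flag, and two assumed covariates.
        df = pd.DataFrame({
            "years_observed": [5, 12, 7, 20, 3, 15, 9, 11],
            "adopted_code":   [1, 1, 0, 1, 1, 0, 1, 0],
            "gov_capacity":   [0.8, 0.3, 0.5, 0.6, 0.9, 0.1, 0.7, 0.4],
            "income_per_cap": [48, 39, 45, 44, 52, 33, 41, 37],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="years_observed", event_col="adopted_code")
        cph.print_summary()  # hazard ratios show how covariates shift the adoption hazard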

  7. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music

    Science.gov (United States)

    Vuust, Peter; Witek, Maria A. G.

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813
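
    The prediction-error minimization idea summarized above can be illustrated with a deliberately simplified numerical toy (not a model from the paper): a prior expectation of the inter-onset interval is nudged toward each incoming interval in proportion to an assumed precision weight.

        # Toy illustration of precision-weighted prediction-error updating, in the
        # spirit of the predictive coding account sketched above. The inter-onset
        # intervals and the precision weight are invented values, not data or a
        # model from the paper.
        def update_expectation(prior, observations, precision_weight=0.3):
            """Return the trajectory of the expected interval as errors are minimized."""
            expectation = prior
            trajectory = [expectation]
            for obs in observations:
                prediction_error = obs - expectation                  # mismatch with expectation
                expectation += precision_weight * prediction_error    # Bayesian-style update
                trajectory.append(expectation)
            return trajectory

        if __name__ == "__main__":
            # A steady beat of 500 ms that becomes syncopated (shifted onsets) halfway.
            intervals_ms = [500, 500, 500, 500, 375, 625, 375, 625]
            for step, exp in enumerate(update_expectation(500.0, intervals_ms)):
                print(f"step {step}: expected interval = {exp:.1f} ms")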

  8. Simulation of transport in the ignited ITER with 1.5-D predictive code

    International Nuclear Information System (INIS)

    Becker, G.

    1995-01-01

    The confinement in the bulk and scrape-off layer plasmas of the ITER EDA and CDA is investigated with special versions of the 1.5-D BALDUR predictive transport code for the case of peaked density profiles (C υ = 1.0). The code self-consistently computes 2-D equilibria and solves 1-D transport equations with empirical transport coefficients for the ohmic, L and ELMy H mode regimes. Self-sustained steady state thermonuclear burn is demonstrated for up to 500 s. It is shown to be compatible with the strong radiation losses for divertor heat load reduction caused by the seeded impurities iron, neon and argon. The corresponding global and local energy and particle transport are presented. The required radiation corrected energy confinement times of the EDA and CDA are found to be close to 4 s. In the reference cases, the steady state helium fraction is 7%. The fractions of iron, neon and argon needed for the prescribed radiative power loss are given. It is shown that high radiative losses from the confinement zone, mainly by bremsstrahlung, cannot be avoided. The radiation profiles of iron and argon are found to be the same, with two thirds of the total radiation being emitted from closed flux surfaces. Fuel dilution due to iron and argon is small. The neon radiation is more peripheral. But neon is found to cause high fuel dilution. The combined dilution effect by helium and neon conflicts with burn control, self-sustained burn and divertor power reduction. Raising the helium fraction above 10% leads to the same difficulties owing to fuel dilution. The high helium levels of the present EDA design are thus unacceptable. The bootstrap current has only a small impact on the current profile. The sawtooth dominated region is found to cover 35% of the plasma cross-section. Local stability analysis of ideal ballooning modes shows that the plasma is everywhere well below the stability limit. 23 refs, 34 figs, 3 tabs

  9. A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code

    International Nuclear Information System (INIS)

    Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.

    2009-07-01

    Full text: A fusion hybrid, or a small fusion power output from a low-power tokamak reactor, is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small-fusion-power-output design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design are carried out using the empirical-based Mixed Bohm/gyro-Bohm (B/gB) model, whereas the pedestal temperature model is based on a pedestal width scaling with magnetic and flow shear stabilization (δ ∝ ρζ²). The preliminary results using this core transport model show that the predicted central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization is carried out by varying some parameters, such as the plasma current and the auxiliary heating power, which results in some improvement of the plasma performance.

  10. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    Science.gov (United States)

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
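
    The role of the scan order can be illustrated generically: re-serializing a residual block with a scan that matches the prediction direction groups the zero coefficients together, which suits run-length-style entropy coding. The 4x4 block and the two scan patterns below are invented for illustration and are not the mode-dependent templates proposed in the paper.

        # Generic illustration of re-serializing a 4x4 residual block with a scan
        # order before entropy coding. The block values and both scan patterns are
        # invented; they are not the MD-templates of the paper.

        # A residual block whose non-zero samples cluster in the first column,
        # as might happen after horizontal intra prediction.
        BLOCK = [
            [7, 0, 0, 0],
            [5, 0, 0, 0],
            [3, 0, 0, 0],
            [1, 0, 0, 0],
        ]

        ZIGZAG = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
                  (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]
        COLUMN_FIRST = [(r, c) for c in range(4) for r in range(4)]  # assumed "template" scan

        def scan(block, order):
            """Serialize the block samples in the given scan order."""
            return [block[r][c] for r, c in order]

        def trailing_zeros(seq):
            """Count zeros at the end of the scanned sequence."""
            n = 0
            for v in reversed(seq):
                if v != 0:
                    break
                n += 1
            return n

        if __name__ == "__main__":
            for name, order in (("zigzag", ZIGZAG), ("column-first", COLUMN_FIRST)):
                s = scan(BLOCK, order)
                print(f"{name:12s} {s}  trailing zeros: {trailing_zeros(s)}")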

  11. Data processing codes for fatigue and tensile tests

    International Nuclear Information System (INIS)

    Sanchez Sarmiento, Gustavo; Iorio, A.F.; Crespi, J.C.

    1981-01-01

    The processing of fatigue and tensile test data in order to obtain several parameters of engineering interest requires a considerable effort of numerical calculation. In order to reduce the time spent on this work and to establish standardized data processing for sets of similar tests, it is very advantageous to have a calculation code that can be run on a computer. Two codes have been developed in the FORTRAN language; one of them predicts the cyclic properties of materials from monotonic and incremental or multiple-cyclic-step tests (ENSPRED CODE), and the other reduces data coming from strain-controlled low-cycle fatigue tests (ENSDET CODE). Two examples are included using Zircaloy-4 material from different manufacturers. (author)

  12. Analysis of ATLAS FLB-EC6 Experiment using SPACE Code

    International Nuclear Information System (INIS)

    Lee, Donghyuk; Kim, Yohan; Kim, Seyun

    2013-01-01

    The new code is named SPACE (Safety and Performance Analysis Code for Nuclear Power Plants). As a part of the code validation effort, a simulation of the ATLAS FLB (Feedwater Line Break) experiment has been performed using the SPACE code. The FLB-EC6 experiment represents an economizer break of a main feedwater line. The calculated results from the SPACE code are compared with those from the experiment. The comparisons of break flow rate and steam generator water level show good agreement with the experiment, indicating that the SPACE code is capable of predicting the physical phenomena occurring during the ATLAS FLB-EC6 experiment.

  13. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    Science.gov (United States)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into biregions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.

  14. TESLA: Large Signal Simulation Code for Klystrons

    International Nuclear Information System (INIS)

    Vlasov, Alexander N.; Cooke, Simon J.; Chernin, David P.; Antonsen, Thomas M. Jr.; Nguyen, Khanh T.; Levush, Baruch

    2003-01-01

    TESLA (Telegraphist's Equations Solution for Linear Beam Amplifiers) is a new code designed to simulate linear beam vacuum electronic devices with cavities, such as klystrons, extended interaction klystrons, twistrons, and coupled cavity amplifiers. The model includes a self-consistent, nonlinear solution of the three-dimensional electron equations of motion and the solution of time-dependent field equations. The model differs from the conventional Particle in Cell approach in that the field spectrum is assumed to consist of a carrier frequency and its harmonics with slowly varying envelopes. Also, fields in the external cavities are modeled with circuit like equations and couple to fields in the beam region through boundary conditions on the beam tunnel wall. The model in TESLA is an extension of the model used in gyrotron code MAGY. The TESLA formulation has been extended to be capable to treat the multiple beam case, in which each beam is transported inside its own tunnel. The beams interact with each other as they pass through the gaps in their common cavities. The interaction is treated by modification of the boundary conditions on the wall of each tunnel to include the effect of adjacent beams as well as the fields excited in each cavity. The extended version of TESLA for the multiple beam case, TESLA-MB, has been developed for single processor machines, and can run on UNIX machines and on PC computers with a large memory (above 2GB). The TESLA-MB algorithm is currently being modified to simulate multiple beam klystrons on multiprocessor machines using the MPI (Message Passing Interface) environment. The code TESLA has been verified by comparison with MAGIC for single and multiple beam cases. The TESLA code and the MAGIC code predict the same power within 1% for a simple two cavity klystron design while the computational time for TESLA is orders of magnitude less than for MAGIC 2D. In addition, recently TESLA was used to model the L-6048 klystron, code

  15. Probabilistic evaluations for CANTUP computer code analysis improvement

    International Nuclear Information System (INIS)

    Florea, S.; Pavelescu, M.

    2004-01-01

    Structural analysis with the finite element method is today a usual way to evaluate and predict the behavior of structural assemblies subjected to harsh conditions, in order to ensure their safety and reliability during operation. A CANDU 600 fuel channel is an example of an assembly working in harsh conditions in which, besides corrosive and thermal aggression, long-term irradiation interferes, with implicit consequences for the evolution of material properties. That leads inevitably to scatter in the time-dependent material properties, whose dynamic evolution is subject to a great degree of uncertainty. These are the reasons for developing, in association with deterministic evaluations with computer codes, probabilistic and statistical methods in order to predict the structural component response. This work initiates the extension of the deterministic thermomechanical evaluation of fuel channel components to a probabilistic structural mechanics approach, starting with deterministic analysis performed with the CANTUP computer code, which was developed to predict the long-term mechanical behavior of the pressure tube - calandria tube assembly. To this purpose the structure of the deterministic CANTUP computer code has been reviewed. The code has been adapted from the LAHEY 77 platform to the Microsoft Developer Studio - Fortran Power Station platform. In order to perform probabilistic evaluations, a part was added to the deterministic code which, using a subroutine from the IMSL library of the Microsoft Developer Studio - Fortran Power Station platform, generates pseudo-random values of a specified quantity. A normal distribution around the deterministic value, with a 5% standard deviation, was simulated for the Young's modulus material property in order to verify the statistical calculation of the creep behavior. The tube deflection and effective stresses were the properties subject to probabilistic evaluation. All the values of these properties obtained for all the values for
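
    The probabilistic step described above can be sketched as follows: Young's modulus is sampled from a normal distribution with a 5% relative standard deviation and propagated through a stand-in deflection formula. The simply supported beam formula and all numerical values are illustrative placeholders, not the CANTUP pressure-tube model.

        # Sketch of the Monte Carlo step described above: Young's modulus is sampled
        # from a normal distribution (5% relative standard deviation) and propagated
        # through a stand-in deflection formula. Formula and numbers are placeholders.
        import random
        import statistics

        E_MEAN = 93.0e9        # Pa, assumed nominal Young's modulus
        E_STD = 0.05 * E_MEAN  # 5% standard deviation, as in the study

        def deflection(young_modulus, load=2.0e3, length=6.0, inertia=1.0e-5):
            """Mid-span deflection of a simply supported beam under a central load."""
            return load * length**3 / (48.0 * young_modulus * inertia)

        def monte_carlo(samples=10000, seed=1):
            random.seed(seed)
            results = [deflection(random.gauss(E_MEAN, E_STD)) for _ in range(samples)]
            return statistics.mean(results), statistics.stdev(results)

        if __name__ == "__main__":
            mean_d, std_d = monte_carlo()
            print(f"deflection: mean = {mean_d*1000:.3f} mm, std = {std_d*1000:.3f} mm")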

  16. Development of the neutron-transport code TransRay and studies on the two- and three-dimensional calculation of effective group cross sections; Entwicklung des Neutronentransportcodes TransRay und Untersuchungen zur zwei- und dreidimensionalen Berechnung effektiver Gruppenwirkungsquerschnitte

    Energy Technology Data Exchange (ETDEWEB)

    Beckert, C.

    2007-12-19

    Conventionally, the data preparation of the neutron cross sections for reactor-core calculations is carried out with 2D cell codes. The aim of this thesis was to develop a 3D cell code, to study 3D effects with this code, and to evaluate the necessity of a 3D data preparation of the neutron cross sections. For the calculation of the neutron transport, the method of first-collision probabilities, calculated with the ray-tracing method, was chosen. The mathematical algorithms were implemented in the 2D/3D cell code TransRay. For the geometry part of the program, the geometry module of a Monte Carlo code was used. The ray tracing in 3D was parallelized because of the high computational time. The program TransRay was verified on 2D test problems. For a reference pressurized-water reactor the following 3D problems were studied: a partly immersed control rod, and void (vacuum or steam) around a fuel rod as a model of a steam void. For comparison, all problems were also calculated with the programs HELIOS (2D) and MCNP (3D). The dependence of the multiplication factor and the averaged two-group cross sections on the immersion depth of the control rod and on the height of the steam void, respectively, was studied. The 3D-calculated two-group cross sections were compared with three conventional approximations: linear interpolation, interpolation with flux weighting, and homogenization. For the 3D problem of the control rod it was shown that the interpolation with flux weighting is a good approximation, so a 3D data preparation is not necessary there. For the test case of the single fuel rod surrounded by void, the three approximations for the two-group cross sections proved insufficient, and a 3D data preparation is therefore necessary. The single fuel-rod cell with void can be considered as the limiting case of a reactor in which a phase interface has formed.
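
    The ray-tracing idea behind collision-probability methods can be sketched in a few lines: parallel rays are traced across a square cell containing a circular fuel region and the track lengths are accumulated per region, which recovers the region volume fractions (a standard consistency check); the real first-collision probabilities additionally weight each track with exponential attenuation. The cell pitch, fuel radius and ray spacing below are assumptions, not TransRay input.

        # Hedged sketch of the ray-tracing step used by collision-probability methods:
        # horizontal rays are traced across a square pin cell with a circular fuel
        # region and chord (track) lengths are accumulated per region. Geometry and
        # ray count are assumptions, not TransRay data.
        import math

        PITCH = 1.26   # cm, square cell side (assumed)
        RADIUS = 0.41  # cm, fuel radius (assumed)

        def trace_horizontal_rays(n_rays=2001):
            """Accumulate horizontal track lengths in fuel and moderator."""
            fuel_len = mod_len = 0.0
            for i in range(n_rays):
                y = (i + 0.5) / n_rays * PITCH - PITCH / 2.0    # ray height within the cell
                if abs(y) < RADIUS:
                    chord = 2.0 * math.sqrt(RADIUS**2 - y**2)   # chord across the fuel circle
                else:
                    chord = 0.0
                fuel_len += chord
                mod_len += PITCH - chord
            total = fuel_len + mod_len
            return fuel_len / total, mod_len / total

        if __name__ == "__main__":
            f_frac, m_frac = trace_horizontal_rays()
            exact = math.pi * RADIUS**2 / PITCH**2
            print(f"ray-traced fuel fraction {f_frac:.4f} vs exact {exact:.4f}")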

  17. Future trends in image coding

    Science.gov (United States)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of the upcoming commercial products in the market place which will be the main factors in establishing the future stage to image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. The television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  18. Erosion corrosion in power plant piping systems - Calculation code for predicting wall thinning

    International Nuclear Information System (INIS)

    Kastner, W.; Erve, M.; Henzel, N.; Stellwag, B.

    1990-01-01

    Extensive experimental and theoretical investigations have been performed to develop a calculation code for wall thinning due to erosion corrosion in power plant piping systems. The so-called WATHEC code can be applied to single-phase water flow as well as to two-phase water/steam flow. Only input data which are available to the operator of the plant are taken into consideration. Together with a continuously updated erosion corrosion database, the calculation code forms one element of a weak-point analysis for power plant piping systems which can be applied to minimize material loss due to erosion corrosion, reduce non-destructive testing, curtail monitoring programs for piping systems, and recommend life-extending measures. (author). 12 refs, 17 figs

  19. Study in stationary state of the subcriticality of intermediate configurations of core in the reloading process of a BWR

    International Nuclear Information System (INIS)

    Hernandez, J.L.; Montes, J.L.; Perusquia, R.; Ortiz, J.J.

    2006-01-01

    In this work, a three-dimensional simulation of the behavior of the reactor core at different stages of the fuel assembly replacement process is carried out with the COREMASTER-PRESTO code. To perform the simulation, this code requires a database of nuclear parameters, including those associated with regions of an assembly that contain no fuel and are instead occupied by moderator. These nuclear parameters are calculated with the AURORA-HELIOS-ZENITH-TABGEN system. One of the approaches consisted of designing a 'water assembly', that is, an axial arrangement of 25 'water cells'. To obtain an adequate 'water cell', some selective test cases were carried out, since in two cases it was necessary to find a sufficient minimum amount of fissile material for the correct execution of HELIOS first, and of COREMASTER-PRESTO afterwards. In the first case, the situation was resolved by symmetrically placing 6 natural-uranium rods in the lateral regions of the cell, which yields a k-inf value of 0.1592 in the HELIOS calculations at the cold zero power (CZP) condition with 0% voids. In the second case the cell includes 28 symmetrically placed natural-uranium rods, and the k-inf value is 0.45290. These values are the maxima over the life of the 'cell'. Among the activities carried out during fuel substitution is the evaluation of the subcriticality of the core after a given number of substitution movements. The results obtained when evaluating the k-effective in cold conditions for 5 different intermediate core configurations, as the fuel loading process advances, are presented. To perform the evaluation with CM-PRESTO, in each configuration the remainder of the 444 assemblies was filled with the so-called 'water assembly'. In all the evaluated cases the subcriticality of the core was demonstrated in cold condition and with

  20. Heterogeneous fuels for minor actinides transmutation: Fuel performance codes predictions in the EFIT case study

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, R., E-mail: rolando.calabrese@enea.i [ENEA, Innovative Nuclear Reactors and Fuel Cycle Closure Division, via Martiri di Monte Sole 4, 40129 Bologna (Italy); Vettraino, F.; Artioli, C. [ENEA, Innovative Nuclear Reactors and Fuel Cycle Closure Division, via Martiri di Monte Sole 4, 40129 Bologna (Italy); Sobolev, V. [SCK.CEN, Belgian Nuclear Research Centre, Boeretang 200, B-2400 Mol (Belgium); Thetford, R. [Serco Technical and Assurance Services, 150 Harwell Business Centre, Didcot OX11 0QB (United Kingdom)

    2010-06-15

    The presented results were used for testing newly developed models installed in the TRANSURANUS code to deal with such innovative fuels and T91 steel cladding. Agreement among the code predictions was satisfactory for fuel and cladding temperatures, pellet-cladding gap and mechanical stresses.

  1. NASA space radiation transport code development consortium

    International Nuclear Information System (INIS)

    Townsend, L. W.

    2005-01-01

    Recently, NASA established a consortium involving the Univ. of Tennessee (lead institution), the Univ. of Houston, Roanoke College and various government and national laboratories, to accelerate the development of a standard set of radiation transport computer codes for NASA human exploration applications. This effort involves further improvements of the Monte Carlo codes HETC and FLUKA and the deterministic code HZETRN, including developing nuclear reaction databases necessary to extend the Monte Carlo codes to carry out heavy ion transport, and extending HZETRN to three dimensions. The improved codes will be validated by comparing predictions with measured laboratory transport data, provided by an experimental measurements consortium, and measurements in the upper atmosphere on the balloon-borne Deep Space Test Bed (DSTB). In this paper, we present an overview of the consortium members and the current status and future plans of consortium efforts to meet the research goals and objectives of this extensive undertaking. (authors)

  2. International assessment of PCA codes

    International Nuclear Information System (INIS)

    Neymotin, L.; Lui, C.; Glynn, J.; Archarya, S.

    1993-11-01

    Over the past three years (1991-1993), an extensive international exercise for intercomparison of a group of six Probabilistic Consequence Assessment (PCA) codes was undertaken. The exercise was jointly sponsored by the Commission of European Communities (CEC) and OECD Nuclear Energy Agency. This exercise was a logical continuation of a similar effort undertaken by OECD/NEA/CSNI in 1979-1981. The PCA codes are currently used by different countries for predicting radiological health and economic consequences of severe accidents at nuclear power plants (and certain types of non-reactor nuclear facilities) resulting in releases of radioactive materials into the atmosphere. The codes participating in the exercise were: ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this inter-code comparison effort, two separate groups performed a similar set of calculations using two of the participating codes, MACCS and COSYMA. Results of the intercode and inter-MACCS comparisons are presented in this paper. The MACCS group included four participants: GREECE: Institute of Nuclear Technology and Radiation Protection, NCSR Demokritos; ITALY: ENEL, ENEA/DISP, and ENEA/NUC-RIN; SPAIN: Universidad Politecnica de Madrid (UPM) and Consejo de Seguridad Nuclear; USA: Brookhaven National Laboratory, US NRC and DOE

  3. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the well-known MATHEW and ADPIC codes and their auxiliary computer codes from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates a mass-consistent wind field on grid cells by a variational method. ADPIC is a code for three-dimensional prediction of the concentrations of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. Both are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. This report describes i) the computational methods of MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) the translation procedures from the CDC version to the FACOM M-200. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve JAERI researchers as a reference for comparisons in their work. (author)
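
    The particle-in-cell concentration step described for ADPIC can be sketched generically: marker particles of equal mass are binned into grid cells and each cell concentration is the binned mass divided by the cell volume. The grid dimensions, release mass and the crude Gaussian particle cloud below are assumed values, not ARAC or JAERI data.

        # Minimal sketch of a particle-in-cell concentration calculation in the spirit
        # of the ADPIC description above. Grid, cloud and release mass are assumptions.
        import random

        NX, NY, NZ = 10, 10, 5   # grid cells (assumed)
        DX = DY = DZ = 100.0     # cell size in metres (assumed)
        RELEASE_MASS = 1.0       # kg released (assumed)
        N_PARTICLES = 20000

        def disperse_particles(seed=0):
            """Place particles with a crude Gaussian spread about a release point."""
            random.seed(seed)
            pts = []
            for _ in range(N_PARTICLES):
                x = random.gauss(300.0, 120.0)
                y = random.gauss(500.0, 120.0)
                z = abs(random.gauss(50.0, 40.0))   # reflect below-ground particles
                pts.append((x, y, z))
            return pts

        def concentrations(particles):
            """Bin particle mass into grid cells and divide by the cell volume."""
            mass_per_particle = RELEASE_MASS / len(particles)
            cell_volume = DX * DY * DZ
            conc = {}
            for x, y, z in particles:
                i, j, k = int(x // DX), int(y // DY), int(z // DZ)
                if 0 <= i < NX and 0 <= j < NY and 0 <= k < NZ:
                    conc[(i, j, k)] = conc.get((i, j, k), 0.0) + mass_per_particle / cell_volume
            return conc

        if __name__ == "__main__":
            field = concentrations(disperse_particles())
            peak_cell = max(field, key=field.get)
            print(f"peak cell {peak_cell}: {field[peak_cell]:.3e} kg/m^3")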

  4. Comparison of sodium aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Bunz, H.; L'homme, A.; Lhiaubet, G.; Himeno, Y.; Kirby, C.R.; Mitsutsuka, N.

    1984-01-01

    Although hypothetical fast reactor accidents leading to severe core damage are very low probability events, their consequences are to be assessed. During such accidents, one can envisage the ejection of sodium, mixed with fuel and fission products, from the primary circuit into the secondary containment. Aerosols can be formed either by mechanical dispersion of the molten material or as a result of combustion of the sodium in the mixture. Therefore considerable effort has been devoted to study the different sodium aerosol phenomena. To ensure that the problems of describing the physical behaviour of sodium aerosols were adequately understood, a comparison of the codes being developed to describe their behaviour was undertaken. The comparison consists of two parts. The first is a comparative study of the computer codes used to predict aerosol behaviour during a hypothetical accident. It is a critical review of documentation available. The second part is an exercise in which code users have run their own codes with a pre-arranged input. For the critical comparative review of the computer models, documentation has been made available on the following codes: AEROSIM (UK), MAEROS (USA), HAARM-3 (USA), AEROSOLS/A2 (France), AEROSOLS/B1 (France), and PARDISEKO-IIIb (FRG)

  5. Advanced thermionic reactor systems design code

    International Nuclear Information System (INIS)

    Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.

    1991-01-01

    An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance

  6. Positive predictive value between medical-chart body-mass-index category and obesity versus codes in a claims-data warehouse.

    Science.gov (United States)

    Caplan, Eleanor O; Kamble, Pravin S; Harvey, Raymond A; Smolarz, B Gabriel; Renda, Andrew; Bouchard, Jonathan R; Huang, Joanna C

    2018-01-01

    To evaluate the positive predictive value of claims-based V85 codes for identifying individuals with varying degrees of BMI relative to their measured BMI obtained from medical record abstraction. This was a retrospective validation study utilizing administrative claims and medical chart data from 1 January 2009 to 31 August 2015. Randomly selected samples of patients enrolled in a Medicare Advantage Prescription Drug (MAPD) or commercial health plan and with a V85 claim were identified. The claims-based BMI category (underweight, normal weight, overweight, obese class I-III) was determined via corresponding V85 codes and compared to the BMI category derived from chart abstracted height, weight and/or BMI. The positive predictive values (PPVs) of the claims-based BMI categories were calculated with the corresponding 95% confidence intervals (CIs). The overall PPVs (95% CIs) in the MAPD and commercial samples were 90.3% (86.3%-94.4%) and 91.1% (87.3%-94.9%), respectively. In each BMI category, the PPVs (95% CIs) for the MAPD and commercial samples, respectively, were: underweight, 71.0% (55.0%-87.0%) and 75.9% (60.3%-91.4%); normal, 93.8% (85.4%-100%) and 87.8% (77.8%-97.8%); overweight, 97.4% (92.5%-100%) and 93.5% (84.9%-100%); obese class I, 96.9 (90.9%-100%) and 97.2% (91.9%-100%); obese class II, 97.0% (91.1%-100%) and 93.0% (85.4%-100%); and obese class III, 85.0% (73.3%-96.1%) and 97.1% (91.4%-100%). BMI categories derived from administrative claims, when available, can be used successfully particularly in the context of obesity research.
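
    The reported statistic can be reproduced generically: the positive predictive value is the fraction of claims-flagged cases whose chart BMI falls in the same category, with a normal-approximation 95% confidence interval. The counts in the sketch below are made-up examples, not the study's data.

        # Sketch of a PPV calculation with a normal-approximation 95% confidence
        # interval. The counts are invented examples, not the study's data.
        import math

        def ppv_with_ci(true_positives, flagged_total, z=1.96):
            """Return (PPV, lower bound, upper bound) for the given counts."""
            ppv = true_positives / flagged_total
            half_width = z * math.sqrt(ppv * (1.0 - ppv) / flagged_total)
            return ppv, max(0.0, ppv - half_width), min(1.0, ppv + half_width)

        if __name__ == "__main__":
            # e.g. 181 of 200 sampled V85-coded patients had a chart BMI in the same category
            ppv, lo, hi = ppv_with_ci(181, 200)
            print(f"PPV = {ppv:.1%} (95% CI {lo:.1%}-{hi:.1%})")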

  7. SWAAM code development, verification and application to steam generator design

    International Nuclear Information System (INIS)

    Shin, Y.W.; Valentin, R.A.

    1990-01-01

    This paper describes the family of SWAAM codes developed by Argonne National Laboratory to analyze the effects of sodium/water reactions on LMR steam generators. The SWAAM codes were developed as design tools for analyzing various phenomena related to steam generator leaks and to predict the resulting thermal and hydraulic effects on the steam generator and the intermediate heat transport system (IHTS). The theoretical foundations and numerical treatments on which the codes are based are discussed, followed by a description of code capabilities and limitations, verification of the codes by comparison with experiment, and applications to steam generator and IHTS design. (author). 25 refs, 14 figs

  8. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
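
    The derived bounds themselves are not reproduced here, but the following sketch illustrates the underlying channel model: a Monte Carlo estimate of the uncoded PPM symbol error rate on a memoryless Poisson channel with maximum-likelihood slot detection. All parameter values (signal and background photon counts, PPM order) are hypothetical.

      import numpy as np

      def ppm_symbol_error_rate(M=16, ns=5.0, nb=0.2, trials=20000, seed=0):
          rng = np.random.default_rng(seed)
          errors = 0
          for _ in range(trials):
              counts = rng.poisson(nb, size=M)       # background photons in every slot
              counts[0] += rng.poisson(ns)           # signal photons land in the transmitted slot
              winners = np.flatnonzero(counts == counts.max())
              if rng.choice(winners) != 0:           # ML decoding with random tie-breaking
                  errors += 1
          return errors / trials

      print(ppm_symbol_error_rate())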

  9. Studies of fast reactor disassembly using a Bethe-Tait computer code

    International Nuclear Information System (INIS)

    Ludwig, J.C.

    1978-10-01

    The advantages of the fast reactor are given and the general design outlined. Loss of Flow and Transient Overpower faults are possible; the potential consequences of such incidents are analysed using a deterministic approach. The course of an incident is split into several stages; of these only predisassembly and disassembly are considered. Predisassembly computer codes are described in general, and several particular codes are examined in more detail, based on a literature survey. The results and implications of disassembly calculations using the code EXTRA are presented. Here, the effects of several factors, such as the presence of retained fission gases and possible restraints on fuel motion, are investigated. Some comparisons are made with published results from the VENUS-II disassembly code. A general conclusion is that under some circumstances, the yield predicted during disassembly is relatively insensitive to modelling assumptions, and a simple code such as EXTRA may prove adequate if explicit core displacements are not required. A major factor in determining the yield of the disassembly phase is confirmed as being the rate of reactivity insertion during disassembly, as predicted by a predisassembly code. (U.K.)

  10. Evaluation of Advanced Models for PAFS Condensation Heat Transfer in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Byoung-Uhn; Kim, Seok; Park, Yu-Sun; Kang, Kyung Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ahn, Tae-Hwan; Yun, Byong-Jo [Pusan National University, Busan (Korea, Republic of)

    2015-10-15

    The PAFS (Passive Auxiliary Feedwater System) is operated by natural circulation to remove the core decay heat through the PCHX (Passive Condensation Heat Exchanger), which is composed of nearly horizontal tubes. For validation of the cooling and operational performance of the PAFS, the PASCAL (PAFS Condensing Heat Removal Assessment Loop) facility was constructed, and the condensation heat transfer and natural convection phenomena in the PAFS were experimentally investigated at KAERI (Korea Atomic Energy Research Institute). From the PASCAL experimental results, it was found that conventional system analysis codes underestimated the condensation heat transfer. In this study, advanced condensation heat transfer models which can treat the heat transfer mechanisms of the different flow regimes in the nearly horizontal heat exchanger tube were analyzed. The models were implemented in a thermal hydraulic safety analysis code, SPACE (Safety and Performance Analysis Code for Nuclear Power Plant), and evaluated against the PASCAL experimental data. With the aim of enhancing the prediction capability for the condensation phenomenon inside the PCHX tube of the PAFS, advanced models for the condensation heat transfer were implemented into the wall condensation model of the SPACE code, and the PASCAL experimental results were utilized to validate the condensation models. Calculation results showed that the improved model for the condensation heat transfer coefficient enhanced the prediction capability of the SPACE code. This result confirms that the mechanistic modeling of film condensation in the steam phase and convection in the condensate liquid contributed to enhancing the prediction capability of the wall condensation model of the SPACE code and to reducing conservatism in the prediction of condensation heat transfer.
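
    Purely as an illustration (not the models actually implemented in SPACE), the sketch below shows the kind of flow-regime-dependent dispatch such a wall condensation model implies, using a classical Chato/Nusselt-type film-condensation correlation as a stand-in for the stratified regime; the dispatcher, regime names, and property values are all assumptions.

      def chato_film_condensation_htc(k_l, rho_l, rho_v, h_fg, mu_l, d, dT, g=9.81):
          """Chato/Nusselt-type laminar film condensation inside a horizontal tube (SI units)."""
          return 0.555 * (g * rho_l * (rho_l - rho_v) * k_l**3 * h_fg / (mu_l * d * dT)) ** 0.25

      def wall_condensation_htc(regime, props):
          # Hypothetical dispatcher: each flow regime would map to its own mechanistic correlation.
          if regime == "stratified":
              return chato_film_condensation_htc(**props)
          raise NotImplementedError("annular/dispersed regimes would use their own correlations")

      # Rough water/steam properties near atmospheric pressure, 20 mm tube, 10 K wall subcooling.
      props = dict(k_l=0.68, rho_l=958.0, rho_v=0.6, h_fg=2.257e6, mu_l=2.8e-4, d=0.02, dT=10.0)
      print(wall_condensation_htc("stratified", props))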

  11. Development of code SFINEL (Spent fuel integrity evaluator)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Soo; Min, Chin Young; Ohk, Young Kil; Yang, Yong Sik; Kim, Dong Ju; Kim, Nam Ku [Hanyang University, Seoul (Korea)

    1999-01-01

    The SFINEL code, an integrated computer program for predicting spent fuel rod integrity based on burn-up history and major degradation mechanisms, has been developed through this project. The code can simulate the power history of a fuel rod during reactor operation and estimate the degree of deterioration of the spent fuel cladding using recently developed models of the degradation mechanisms. The SFINEL code has been thoroughly benchmarked against collected in-pile data and operating experience covering cladding deformation and rupture, cladding oxidation, rod internal pressure and creep, and finally the comprehensive overall degradation process. (author). 75 refs., 51 figs., 5 tabs.

  12. HETFIS: High-Energy Nucleon-Meson Transport Code with Fission

    International Nuclear Information System (INIS)

    Barish, J.; Gabriel, T.A.; Alsmiller, F.S.; Alsmiller, R.G. Jr.

    1981-07-01

    A model that includes fission for predicting particle production spectra from medium-energy nucleon and pion collisions with nuclei (Z greater than or equal to 91) has been incorporated into the nucleon-meson transport code, HETC. This report is primarily concerned with the programming aspects of HETFIS (High-Energy Nucleon-Meson Transport Code with Fission). A description of the program data and instructions for operating the code are given. HETFIS is written in FORTRAN IV for the IBM computers and is readily adaptable to other systems

  13. The CFEST-INV stochastic hydrology code: Mathematical formulation, application, and user's manual

    International Nuclear Information System (INIS)

    Devary, J.L.

    1987-06-01

    Performance assessments of a nuclear waste repository must consider the hydrologic, thermal, mechanical, and geochemical environments of a candidate site. Predictions of radionuclide transport require estimating water movement as a function of pressure, temperature, and solute concentration. CFEST (Coupled Fluid, Energy, and Solute Transport) is a finite-element-based groundwater code that can be used to simultaneously solve the partial differential equations for pressure head, temperature, and solute concentration. The CFEST code has been designed to support site, repository, and waste package subsystem assessments. CFEST-INV is a stochastic hydrology code that was developed to augment the CFEST code in data processing; model calibration; performance prediction; error propagation; and data collection guidance. The CFEST-INV code utilizes kriging, finite-element modeling, adjoint-sensitivity, statistical-inverse, first-order variance, and Monte-Carlo techniques to develop performance (measure) driven data collection schemes and to determine the waste isolation capabilities (including uncertainties) of candidate repository sites. This report contains the basic physical and numerical principles of the CFEST-INV code, its input parameters, verification exercises, a user's manual, and the code's application history. 18 refs., 16 figs., 6 tabs
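
    A minimal sketch of the Monte-Carlo style error propagation idea mentioned above (not the CFEST-INV implementation): uncertain inputs are sampled, a surrogate performance measure is evaluated, and the spread of the result is reported. The performance measure and the input statistics below are entirely hypothetical.

      import numpy as np

      def propagate(performance_measure, means, stdevs, n=5000, seed=1):
          rng = np.random.default_rng(seed)
          samples = rng.normal(means, stdevs, size=(n, len(means)))
          values = np.apply_along_axis(performance_measure, 1, samples)
          return values.mean(), values.std(ddof=1)

      # Hypothetical performance measure: groundwater travel time ~ porosity * length / Darcy flux.
      travel_time = lambda x: x[0] * 100.0 / x[1]
      print(propagate(travel_time, means=[0.30, 1.0e-5], stdevs=[0.03, 1.0e-6]))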

  14. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    Science.gov (United States)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  15. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We shall here take the approach to fuse the estimated distributions of the SIs as opposed to a conventional fusion algorithm based on the fusion of pixel...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder one....

  16. Performance Prediction of Centrifugal Compressor for Drop-In Testing Using Low Global Warming Potential Alternative Refrigerants and Performance Test Codes

    Directory of Open Access Journals (Sweden)

    Joo Hoon Park

    2017-12-01

    Full Text Available As environmental regulations to stall global warming are strengthened around the world, studies using newly developed low global warming potential (GWP) alternative refrigerants are increasing. In this study, substitute refrigerants, R-1234ze(E) and R-1233zd(E), were used in the centrifugal compressor of an R-134a 2-stage centrifugal chiller with a fixed rotational speed. Performance predictions and thermodynamic analyses of the centrifugal compressor for drop-in testing were performed. A performance prediction method based on the existing ASME PTC-10 performance test code was proposed. The proposed method yielded the expected operating area and operating point of the centrifugal compressor with alternative refrigerants. The thermodynamic performance of the first and second stages of the centrifugal compressor was calculated as the polytropic state. To verify the suitability of the proposed method, the drop-in test results of the two alternative refrigerants were compared. The predicted operating range based on the permissible deviation of ASME PTC-10 confirmed that the temperature difference was very small at the same efficiency. Because the drop-in test of R-1234ze(E) was performed within the expected operating range, the centrifugal compressor using R-1234ze(E) is considered well predicted. However, the predictions of the operating point and operating range of R-1233zd(E) were lower than those of the drop-in test. The proposed performance prediction method will assist in understanding thermodynamic performance at the expected operating point and operating area of a centrifugal compressor using alternative gases based on limited design and structure information.
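
    A hedged sketch of the kind of polytropic-state evaluation implied by an ASME PTC-10 style analysis, using an ideal-gas simplification (the study itself used real refrigerant properties); the suction/discharge states and gas constants below are hypothetical.

      import math

      def polytropic_performance(p1, p2, T1, T2, R, cp):
          """Ideal-gas polytropic head [J/kg] and polytropic efficiency from suction/discharge states."""
          m = math.log(T2 / T1) / math.log(p2 / p1)   # (n - 1) / n inferred from the end states
          head = R * (T2 - T1) / m                    # W_p = [n / (n - 1)] * R * (T2 - T1)
          eta = (R / cp) / m                          # eta_p = [(k - 1) / k] / [(n - 1) / n]
          return head, eta

      # Hypothetical suction/discharge states for an R-134a-like ideal gas (SI units).
      print(polytropic_performance(p1=3.0e5, p2=9.0e5, T1=280.0, T2=330.0, R=81.5, cp=900.0))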

  17. Subchannel analysis of a boiloff experiment by a system thermalhydraulic code

    International Nuclear Information System (INIS)

    Bousbia-Salah, A.; D'Auria, F.

    2001-01-01

    This paper presents the results of a system thermalhydraulic code using the sub-channel analysis approach to predict the Neptun boil-off experiments. This approach is suitable for further work on coupling the system code with a 3D neutron kinetics code. The boil-off tests were conducted in order to simulate the consequences of a loss of coolant inventory leading to uncovery and heat-up of the fuel elements of a nuclear reactor core. In this framework, the Neptun low-pressure test No. 5002, which is a good repeat experiment, is considered. The calculations were carried out using the system transient analysis code Relap5/Mod3.2. A detailed nodalization of the Neptun test section was developed. A reference case was run, and the overall data comparison shows good agreement between calculated and experimental thermalhydraulic parameters. A series of sensitivity analyses was also performed in order to assess the code prediction capabilities. The results obtained were largely satisfactory, which demonstrates the reasonable success of the subchannel analysis approach adopted in the present context for a system thermalhydraulic code. (author)

  18. Development and validation of computer codes for analysis of PHWR containment behaviour

    International Nuclear Information System (INIS)

    Markandeya, S.G.; Haware, S.K.; Ghosh, A.K.; Venkat Raj, V.

    1997-01-01

    In order to ensure that the design intent of the containment of Indian Pressurised Heavy Water Reactors (IPHWRs) is met, both analytical and experimental studies are being pursued at BARC. As a part of analytical studies, computer codes for predicting the behaviour of containment under various accident scenarios are developed/adapted. These include codes for predicting 1) pressure, temperature transients in the containment following either Loss of Coolant Accident (LOCA) or Main Steam Line Break (MSLB), 2) hydrogen behaviour in respect of its distribution, combustion and the performance of proposed mitigation systems, and 3) behaviour of fission product aerosols in the piping circuits of the primary heat transport system and in the containment. All these codes have undergone thorough validation using data obtained from in-house test facilities or from international sources. Participation in the International Standard Problem (ISP) exercises has also helped in validation of the codes. The present paper briefly describes some of these codes and the various exercises performed for their validation. (author)

  19. Assessment of TRAC-PF1/MOD1 code for large break LOCA in PWR

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio; Abe, Yutaka.

    1993-03-01

    As the first step of the REFLA/TRAC code development, the TRAC-PF1/MOD1 code has been assessed for various experiments that simulate a postulated large-break loss-of-coolant accident (LBLOCA) in a PWR to understand the predictive capability and to identify the problem areas of the code. The assessment calculations were performed for separate effect tests for critical flow, counter current flow, condensation at cold leg and reflood as well as integral tests to understand predictability for individual phenomena. This report summarizes results from the assessment calculations of the TRAC-PF1/MOD1 code for LBLOCA in PWR. The assessment calculations made clear the predictive capability and problem areas of the TRAC-PF1/MOD1 code for LBLOCA in PWR. The areas, listed below, should be improved for more realistic and effective simulation of LBLOCA in PWR: (1) core heat transfer model during blowdown, (2) ECC bypass model at downcomer during refill, (3) condensation model during accumulator injection, and (4) core thermal hydraulic model during reflood. (author) 57 refs

  20. Application of advanced validation concepts to oxide fuel performance codes: LIFE-4 fast-reactor and FRAPCON thermal-reactor fuel performance codes

    Energy Technology Data Exchange (ETDEWEB)

    Unal, C., E-mail: cu@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Williams, B.J. [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Yacout, A. [Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States); Higdon, D.M. [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States)

    2013-10-15

    Highlights: ► The application of advanced validation techniques (sensitivity, calibration and prediction) to nuclear performance codes FRAPCON and LIFE-4 is the focus of the paper. ► A sensitivity ranking methodology narrows down the number of selected modeling parameters from 61 to 24 for FRAPCON and from 69 to 35 for LIFE-4. ► Fuel creep, fuel thermal conductivity, fission gas transport/release, crack/boundary, and fuel gap conductivity models of LIFE-4 are identified for improvements. ► FRAPCON sensitivity results indicated the importance of the fuel thermal conduction and the fission gas release models. -- Abstract: Evolving nuclear energy programs expect to use enhanced modeling and simulation (M and S) capabilities, using multiscale, multiphysics modeling approaches, to reduce both cost and time from the design through the licensing phases. Interest in the development of the multiscale, multiphysics approach has increased in the last decade because of the need for predictive tools for complex interacting processes as a means of eliminating the limited use of empirically based model development. Complex interacting processes cannot be predicted by analyzing each individual component in isolation. In most cases, the mathematical models of complex processes and their boundary conditions are nonlinear. As a result, the solutions of these mathematical models often require high-performance computing capabilities and resources. The use of multiscale, multiphysics (MS/MP) models in conjunction with high-performance computational software and hardware introduces challenges in validating these predictive tools—traditional methodologies will have to be modified to address these challenges. The advanced MS/MP codes for nuclear fuels and reactors are being developed within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program of the US Department of Energy (DOE) – Nuclear Energy (NE). This paper does not directly address challenges in calibration

  1. Results of neutronic benchmark analysis for a high temperature reactor of the GT-MHR type - HTR2008-58107

    International Nuclear Information System (INIS)

    Boyarinov, V. F.; Bryzgalov, V. I.; Davidenko, V. D.; Fomichenko, P. A.; Glushkov, E. S.; Gomin, E. A.; Gurevich, M. I.; Kodochigov, N. G.; Marova, E. V.; Mitenkova, E. F.; Novikov, N. V.; Osipov, S. L.; Sukharev, Y. P.; Tsibulsky, V. F.; Yudkevich, M. S.

    2008-01-01

    The paper presents a description of the benchmark cases, the results achieved, and an analysis of possible reasons for the differences among calculation results obtained by various neutronic codes. The comparative analysis shows the benchmark results obtained with reference and design codes by Russian specialists (WIMS-D, JAR-HTGR, UNK, MCU, MCNP5-MONTEBURNS1.0-ORIGEN2.0), by French specialists (APOLLO2, TRIPOLI4 codes), and by Korean specialists (HELIOS, MASTER, MCNP5 codes). The analysis of possible reasons for deviations was carried out, aimed at decreasing the uncertainties in the calculated characteristics. This additional investigation was conducted with the use of 2D models of a fuel assembly cell and a reactor plane section. (authors)

  2. Development of REFLA/TRAC code for engineering work station

    International Nuclear Information System (INIS)

    Ohnuki, Akira; Akimoto, Hajime; Murao, Yoshio

    1994-03-01

    The REFLA/TRAC code is a best-estimate code which is expected to check reactor safety analysis codes for light water reactors (LWRs) and to perform accident analyses for LWRs and also for an advanced LWR. Therefore, a high predictive capability is required, and the assessment of each physical model becomes important because the models govern the predictive capability. For the assessment of the three-dimensional models in the REFLA/TRAC code, a conventional large computer has been used, and it is difficult to perform the assessment efficiently because the turnaround time for the calculation and the analysis is long. Therefore, a version of the REFLA/TRAC code that can run on an engineering workstation (EWS) was developed. The calculational speed of current EWSs is of the same order as that of large computers, and the EWS has excellent functions for multidimensional graphical drawing. In addition, plotting processors for X-Y drawing and for two-dimensional graphical drawing were developed in order to perform efficient analyses of three-dimensional calculations. In the future, the assessment of three-dimensional models is expected to become more efficient with the introduction of EWSs with higher calculational speed and improved graphical drawing. In this report, an outline of each of the following three programs is given: (1) EWS version of REFLA/TRAC code, (2) Plot processor for X-Y drawing and (3) Plot processor for two-dimensional graphical drawing. (author)

  3. Application of the MELCOR code to design basis PWR large dry containment analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Jesse; Notafrancesco, Allen (USNRC, Office of Nuclear Regulatory Research, Rockville, MD); Tills, Jack Lee (Jack Tills & Associates, Inc., Sandia Park, NM)

    2009-05-01

    The MELCOR computer code has been developed by Sandia National Laboratories under USNRC sponsorship to provide capability for independently auditing analyses submitted by reactor manufacturers and utilities. MELCOR is a fully integrated code (encompassing the reactor coolant system and the containment building) that models the progression of postulated accidents in light water reactor power plants. To assess the adequacy of containment thermal-hydraulic modeling incorporated in the MELCOR code for application to PWR large dry containments, several selected demonstration designs were analyzed. This report documents MELCOR code demonstration calculations performed for postulated design basis accident (DBA) analysis (LOCA and MSLB) inside containment, which are compared to other code results. The key processes when analyzing the containment loads inside PWR large dry containments are (1) expansion and transport of high mass/energy releases, (2) heat and mass transfer to structural passive heat sinks, and (3) containment pressure reduction due to engineered safety features. A code-to-code benchmarking for DBA events showed that MELCOR predictions of maximum containment loads were equivalent to similar predictions using a qualified containment code known as CONTAIN. This equivalency was found to apply for both single- and multi-cell containment models.

  4. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1983-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  5. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1982-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  6. OECD International Standard Problem number 34. Falcon code comparison report

    International Nuclear Information System (INIS)

    Williams, D.A.

    1994-12-01

    ISP-34 is the first ISP to address fission product transport issues and has been strongly supported by a large number of different countries and organisations. The ISP is based on two experiments, FAL-ISP-1 and FAL-ISP-2, which were conducted in AEA's Falcon facility. Specific features of the experiments include quantification of chemical effects and aerosol behaviour. In particular, multi-component aerosol effects and vapour-aerosol interactions can all be investigated in the Falcon facility. Important parameters for participants to predict were the deposition profiles and composition, key chemical species and reactions, evolution of suspended material concentrations, and the effects of steam condensation onto aerosols and particle hygroscopicity. The results of the Falcon ISP support the belief that aerosol physics is generally well modelled in primary circuit codes, but the chemistry models in many of the codes need to be improved, since chemical speciation is one of the main factors which controls transport and deposition behaviour. The importance of chemical speciation, aerosol nucleation, and the role of multi-component aerosols in determining transport and deposition behaviour are evident. The role of re-vaporization in these Falcon experiments is not clear; it is not possible to compare those codes which predicted re-vaporization with quantitative data. The evidence from this ISP exercise indicates that the containment codes can predict thermal-hydraulics conditions satisfactorily. However, the differences in the predicted aerosol locations in the Falcon tests had shown that aerosol behaviour was very susceptible to parameters such as particle size distribution

  7. Optimising Boltzmann codes for the PLANCK era

    International Nuclear Information System (INIS)

    Hamann, Jan; Lesgourgues, Julien; Balbi, Amedeo; Quercellini, Claudia

    2009-01-01

    High precision measurements of the Cosmic Microwave Background (CMB) anisotropies, as can be expected from the PLANCK satellite, will require high-accuracy theoretical predictions as well. One possible source of theoretical uncertainty is the numerical error in the output of the Boltzmann codes used to calculate angular power spectra. In this work, we carry out an extensive study of the numerical accuracy of the public Boltzmann code CAMB, and identify a set of parameters which determine the error of its output. We show that at the current default settings, the cosmological parameters extracted from data of future experiments like Planck can be biased by several tenths of a standard deviation for the six parameters of the standard ΛCDM model, and potentially more seriously for extended models. We perform an optimisation procedure that leads the code to achieve sufficient precision while at the same time keeping the computation time within reasonable limits. Our conclusion is that the contribution of numerical errors to the theoretical uncertainty of model predictions is well under control—the main challenges for more accurate calculations of CMB spectra will be of an astrophysical nature instead

  8. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  9. Implementation of JAERI's reflood model into TRAC-PF1/MOD1 code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-02-01

    Selected physical models of the REFLA code, a reflood analysis code developed at JAERI, were implemented into the TRAC-PF1/MOD1 code in order to improve the predictive capability of the TRAC-PF1/MOD1 code for the core thermal hydraulic behaviors during the reflood phase of a PWR LOCA. Through comparisons of the physical models of both codes, (1) the Murao-Iguchi void fraction correlation, (2) the drag coefficient correlation acting on drops, (3) the correlation for the wall heat transfer coefficient in the film boiling regime, (4) the quench velocity correlation and (5) heat transfer correlations for the dispersed flow regime were selected from the REFLA code to be implemented into the TRAC-PF1/MOD1 code. A method for the transformation of the void fraction correlation to an equivalent interfacial friction model was developed, and the effect of the transformation method on the stability of the solution was discussed. Through assessment calculations using data from a CCTF (Cylindrical Core Test Facility) flat power test, it was confirmed that the predictive capability of the TRAC code for the core thermal hydraulic behaviors during reflood can be improved by the implementation of selected physical models of the REFLA code. Several user guidelines for the modified TRAC code were proposed based on sensitivity studies on the number of fluid cells in the hydraulic calculation and on the number of nodes and the effect of axial heat conduction in the fuel rod heat conduction calculation. (author)

  10. Analyses and computer code developments for accident-induced thermohydraulic transients in water-cooled nuclear reactor systems

    International Nuclear Information System (INIS)

    Wulff, W.

    1977-01-01

    A review is presented on the development of analyses and computer codes for the prediction of thermohydraulic transients in nuclear reactor systems. Models for the dynamics of two-phase mixtures are summarized. Principles of process, reactor component and reactor system modeling are presented, as well as the verification of these models by comparing predicted results with experimental data. Codes of major importance are described, which have recently been developed or are presently under development. The characteristics of these codes are presented in terms of governing equations, solution techniques and code structure. Current efforts and problems of code verification are discussed. A summary is presented of advances which are necessary for reducing the conservatism currently implied in reactor hydraulics codes for safety assessment

  11. The Barrier code for predicting long-term concrete performance

    International Nuclear Information System (INIS)

    Shuman, R.; Rogers, V.C.; Shaw, R.A.

    1989-01-01

    There are numerous features incorporated into an LLW disposal facility that deal directly with critical safety objectives required by the NRC in 10 CFR 61. Engineered barriers or structures incorporating concrete are commonly being considered for waste disposal facilities. The Barrier computer code calculates the long-term degradation of concrete structures in LLW disposal facilities. It couples this degradation with water infiltration into the facility, nuclide leaching from the waste, contaminated water release from the facility, and associated doses to members of the critical population group. The concrete degradation methodology of Barrier is described

  12. Thermal-hydraulic analysis of water cooled breeding blanket of K-DEMO using MARS-KS code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong-Hun; Park, Il Woong; Kim, Geon-Woo; Park, Goon-Cherl [Seoul National University, Seoul (Korea, Republic of); Cho, Hyoung-Kyu, E-mail: chohk@snu.ac.kr [Seoul National University, Seoul (Korea, Republic of); Im, Kihak [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Highlights: • The thermal design of breeding blanket for the K-DEMO is evaluated using MARS-KS. • To confirm the prediction capability of MARS, the results were compared with the CFD. • The results of MARS-KS calculation and CFD prediction are in good agreement. • A transient simulation was carried out so as to show the applicability of MARS-KS. • A methodology to simulate the entire blanket system is proposed. - Abstract: The thermal design of a breeding blanket for the Korean Fusion DEMOnstration reactor (K-DEMO) is evaluated using the Multidimensional Analysis of Reactor Safety (MARS-KS) code in this study. The MARS-KS code has advantages in simulating transient two-phase flow over computational fluid dynamics (CFD) codes. In order to confirm the prediction capability of the code for the present blanket system, the calculation results were compared with the CFD prediction. The results of MARS-KS calculation and CFD prediction are in good agreement. Afterwards, a transient simulation for a conceptual problem was carried out so as to show the applicability of MARS-KS for a transient or accident condition. Finally, a methodology to simulate the multiple blanket modules is proposed.

  13. Hydrogen burn assessment with the CONTAIN code

    International Nuclear Information System (INIS)

    van Rij, H.M.

    1986-01-01

    The CONTAIN computer code was developed at Sandia National Laboratories, under contract to the US Nuclear Regulatory Commission (NRC). The code is intended for calculations of containment loads during severe accidents and for prediction of the radioactive source term in the event that the containment leaks or fails. A strong point of the CONTAIN code is the continuous interaction of the thermal-hydraulics phenomena, aerosol behavior and fission product behavior. The CONTAIN code can be used for Light Water Reactors as well as Liquid Metal Reactors. In order to evaluate the CONTAIN code on its merits, comparisons between the code and experiments must be made. In this paper, CONTAIN calculations for the hydrogen burn experiments, carried out at the Nevada Test Site (NTS), are presented and compared with the experimental data. In the Large-Scale Hydrogen Combustion Facility at the NTS, 21 tests have been carried out. These tests were sponsored by the NRC and the Electric Power Research Institute (EPRI). The tests, carried out by EG&G, were performed in a spherical vessel 16 m in diameter with a design pressure of 700 kPa, substantially higher than that of most commercial nuclear containment buildings

  14. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the available memory size of one processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has, in general, a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)

  15. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the available memory size of one processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has, in general, a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author).
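
    The parallel-efficiency figures quoted above reduce to a simple ratio; the sketch below shows that bookkeeping with hypothetical timings chosen only to land near the reported 95% value.

      def parallel_efficiency(t_serial, t_parallel, n_procs):
          """Speedup and parallel efficiency from wall-clock times."""
          speedup = t_serial / t_parallel
          return speedup, speedup / n_procs

      # Hypothetical timings: 1520 s on one processor versus 100 s on 16 processors (~95% efficiency).
      print(parallel_efficiency(1520.0, 100.0, 16))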

  16. Identifying Psoriasis and Psoriatic Arthritis Patients in Retrospective Databases When Diagnosis Codes Are Not Available: A Validation Study Comparing Medication/Prescriber Visit-Based Algorithms with Diagnosis Codes.

    Science.gov (United States)

    Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M

    2018-01-01

    Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. To develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis code as the reference standard. Chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
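
    The validation arithmetic described above reduces to confusion-matrix ratios, with the diagnosis code taken as the reference standard. The sketch below uses made-up counts chosen only to mimic the reported pattern of high PPV/NPV and low sensitivity.

      def diagnostic_metrics(tp, fp, fn, tn):
          """Standard confusion-matrix ratios against the diagnosis-code reference standard."""
          return {
              "PPV": tp / (tp + fp),
              "NPV": tn / (tn + fn),
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
          }

      # Made-up counts (not the study data): high PPV/NPV, specificity near 100%, low sensitivity.
      print(diagnostic_metrics(tp=850, fp=150, fn=2000, tn=180000))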

  17. Assessment on the characteristics of the analysis code for KALIMER PSDRS

    Energy Technology Data Exchange (ETDEWEB)

    Eoh, Jae Hyuk; Sim, Yoon Sub; Kim, Seong O.; Kim, Yeon Sik; Kim, Eui Kwang; Wi, Myung Hwan [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-04-01

    The PARS2 code was developed to analyze the RHR (Residual Heat Removal) system, especially the PSDRS (Passive Safety Decay Heat Removal System), of KALIMER. In this report, preliminary verification and sensitivity analyses for the PARS2 code were performed. From the results of the analyses, the PARS2 code shows good agreement with the experimental data of CRIEPI in the range of turbulent airside flow, and the radiation heat transfer mode was also well predicted. In this verification work, it was found that the code calculation stopped at very low air flowrates, and the numerical scheme related to the convergence of the PARS2 code was adjusted to solve this problem. Through the sensitivity analysis of the PARS2 calculation results to changes in the input parameters, the pool-mixing coefficient related to the heat capacity of the structure in the system was improved so that the physical phenomenon could be well predicted. Also, the initial conditions for the code calculation, such as the hot and cold pool temperatures at the PSDRS commencing time, were set up by using a transient analysis with the COMMIX code, and the surface emissivity of the PSDRS was investigated and its permitted variation range was set up. From this study, the overall sensitivity characteristics of the PARS2 code were investigated, and the results of the sensitivity analyses can be used in the design of the RHR system of KALIMER. 14 refs., 28 figs., 2 tabs. (Author)

  18. Daddy, where did (PS)I come from?

    Science.gov (United States)

    Baymann, F; Brugna, M; Mühlenhoff, U; Nitschke, W

    2001-10-30

    The reaction centre I (RCI)-type photosystems from plants, cyano-, helio- and green sulphur bacteria are compared and the essential properties of an archetypal RCI are deduced. Species containing RCI-type photosystems most probably cluster together on a common branch of the phylogenetic tree. The predicted branching order is green sulphur, helio- and cyanobacteria. Striking similarities between RCI- and RCII-type photosystems recently became apparent in the three-dimensional structures of photosystem I (PSI), PSII and RCII. The phylogenetic relationship between all presently known photosystems is analysed, suggesting (a) RCI as the ancestral photosystem and (b) the descendence of PSII from RCI via gene duplication and gene splitting. An evolutionary model trying to rationalise available data is presented.

  19. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
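
    The check-splitting step can be illustrated on a protograph held as a small base matrix (rows are check nodes, columns are variable nodes). The sketch below is an illustration of the idea, not the authors' construction: one check is split into two, roughly half of its edge bundle moves to the new check, and a new degree-2 variable node bridges the pair, lowering the rate.

      import numpy as np

      def split_check(base, check_row):
          """Split one check of a protograph base matrix, bridging the halves with a degree-2 variable node."""
          rows, cols = base.shape
          edges = base[check_row]
          kept = edges // 2                    # roughly half of each edge bundle stays on the old check
          moved = edges - kept                 # the rest goes to the newly created check
          out = np.zeros((rows + 1, cols + 1), dtype=int)
          out[:rows, :cols] = base
          out[check_row, :cols] = kept
          out[rows, :cols] = moved
          out[check_row, cols] = 1             # the new variable node connects the two checks ...
          out[rows, cols] = 1                  # ... and has degree exactly 2
          return out

      # Example: a rate-2/3 protograph with a single degree-6 check node.
      print(split_check(np.array([[2, 2, 2]]), 0))

    In this toy example the protograph rate drops from 2/3 to 1/2 while the only added variable node has degree exactly 2, consistent with the constraint discussed above.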

  20. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

    A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have under-estimated the peak cladding temperature (PCT), with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both spacer grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between the DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed for the original and modified MARS codes for the Flecht-Seaset test and the RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A spacer grid effect, however, is clearly seen from the modified version of the MARS code. (author)
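
    Purely as a hypothetical illustration of the regime-transition idea (not the actual MARS or TRACE interpolation), the sketch below blends an IAFB-type and a DFFB-type wall heat transfer coefficient across an assumed void-fraction band; the band limits and coefficient values are invented.

      def blended_film_boiling_htc(alpha, h_iafb, h_dffb, alpha_low=0.6, alpha_high=0.9):
          """Linearly blend two film-boiling heat transfer coefficients across a void-fraction band."""
          if alpha <= alpha_low:
              return h_iafb
          if alpha >= alpha_high:
              return h_dffb
          w = (alpha - alpha_low) / (alpha_high - alpha_low)
          return (1.0 - w) * h_iafb + w * h_dffb

      # Invented coefficients (W/m2-K) and a void fraction inside the transition band.
      print(blended_film_boiling_htc(0.75, h_iafb=250.0, h_dffb=120.0))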

  1. RELAP5/MOD2 code assessment using a LOFT L2-3 loss of coolant experiment

    International Nuclear Information System (INIS)

    Bang, Young Seok; Chung, Bub Dong; Kim, Hho Jung

    1990-01-01

    The LOFT LOCE L2-3 was simulated using the RELAP5/MOD2 Cycle 36.04 code to assess its capability in predicting the thermal-hydraulic phenomena in an LBLOCA of a PWR. The reactor vessel was simulated with two core channels and split downcomer modeling for a base case calculation using the frozen code. The result of the base calculation showed that the code predicted the hydraulic behavior and the blowdown thermal response in the high-power region of the core within a reasonable range, and that the code had deficiencies in the critical flow model during the subcooled-to-two-phase transition period, in the CHF correlation at high mass flux and in the blowdown rewet criteria. An overprediction of coolant inventory due to these deficiencies yielded a poor prediction of the reflood thermal response. A sensitivity calculation with a version updated from RELAP5/MOD2 Cycle 36.04 improved the prediction of the rewet phenomena

  2. IRIS core criticality calculations

    International Nuclear Information System (INIS)

    Jecmenica, R.; Trontl, K.; Pevec, D.; Grgic, D.

    2003-01-01

    The three-dimensional Monte Carlo computer code KENO-VI of the CSAS26 sequence of the SCALE-4.4 code system was applied to pin-by-pin calculations of the effective multiplication factor for the first-cycle IRIS reactor core. The effective multiplication factors obtained by the above-mentioned Monte Carlo calculations, using the 27-group ENDF/B-IV library and the 238-group ENDF/B-V library, have been compared with the effective multiplication factors achieved by HELIOS/NESTLE, CASMO/SIMULATE, and modified CORD-2 nodal calculations. The results of the Monte Carlo calculations are found to be in good agreement with the results obtained by the nodal codes. The discrepancies in effective multiplication factor are typically within 1%. (author)
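
    As a side note (not part of the study), multiplication-factor discrepancies of this kind are often re-expressed as reactivity differences in pcm; the one-liner below does that conversion with hypothetical k-eff values.

      def delta_rho_pcm(k_ref, k_test):
          """Reactivity difference (rho_test - rho_ref) in pcm, with rho = (k - 1) / k."""
          return (1.0 / k_ref - 1.0 / k_test) * 1.0e5

      # Hypothetical pair of k-eff values differing by roughly 0.5%.
      print(delta_rho_pcm(1.0000, 1.0050))   # ~ +498 pcm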

  3. Update and evaluation of decay data for spent nuclear fuel analyses

    Science.gov (United States)

    Simeonov, Teodosi; Wemple, Charles

    2017-09-01

    Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.

  4. Update and evaluation of decay data for spent nuclear fuel analyses

    Directory of Open Access Journals (Sweden)

    Simeonov Teodosi

    2017-01-01

    Full Text Available Studsvik’s approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
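
    The decay-heat part of an SNF-type evaluation ultimately reduces to a summation over the tracked nuclides, P = sum_i lambda_i * N_i * Q_i with lambda_i = ln(2) / T_half. The sketch below shows that summation with a two-nuclide, entirely hypothetical inventory; a real calculation tracks thousands of nuclides and full emission spectra.

      import math

      def decay_heat_watts(inventory):
          """inventory: iterable of (atoms, half_life_seconds, energy_per_decay_MeV)."""
          mev_to_joule = 1.602176634e-13
          return sum(
              (math.log(2.0) / t_half) * atoms * q_mev * mev_to_joule
              for atoms, t_half, q_mev in inventory
          )

      # Two made-up entries standing in for a Cs-137 / Sr-90-like inventory.
      print(decay_heat_watts([(1.0e24, 9.5e8, 0.8), (5.0e23, 9.1e8, 1.1)]))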

  5. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
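
    A very rough sketch of the budget-tracking idea only (not the authors' algorithm): coding units are encoded with an enlarged mode set while the measured encoding time stays under the target, and with a restricted mode set afterwards. The function names and mode lists are hypothetical.

      import time

      def encode_with_budget(ctus, encode_ctu, cheap_modes, extra_modes, time_budget_s):
          spent = 0.0
          for ctu in ctus:
              # Allow the richer mode set only while the time budget has not been used up.
              modes = cheap_modes + extra_modes if spent < time_budget_s else cheap_modes
              start = time.perf_counter()
              encode_ctu(ctu, modes)          # hypothetical per-CTU encoding call
              spent += time.perf_counter() - start
          return spent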

  6. Capability of the RELAP5 code to simulate natural circulation behaviour in test facilities

    International Nuclear Information System (INIS)

    Mangal, Amit; Jain, Vikas; Nayak, A.K.

    2011-01-01

    In the present study, one of the extensively used best-estimate codes, RELAP5, has been used for the simulation of the steady-state, transient and stability behavior of natural-circulation-based experimental facilities, such as the High-Pressure Natural Circulation Loop (HPNCL) and the Parallel Channel Loop (PCL) installed and operating at BARC. The test data have been generated for a range of pressure, power and subcooling conditions. The computer code RELAP5/MOD3.2 was applied to predict the transient natural circulation characteristics under single-phase and two-phase conditions, the thresholds of flow instability, and the amplitude and frequency of flow oscillations for different operating conditions of the loops. This paper presents the effect of nodalisation on the prediction of natural circulation behavior in test facilities and a comparison of experimental data with code predictions. The errors associated with the predictions are also characterized

  7. Code-B-1 for stress/strain calculation for TRISO fuel particle (Contract research)

    International Nuclear Information System (INIS)

    Aihara, Jun; Ueta, Shohei; Shibata, Taiju; Sawa, Kazuhiro

    2011-12-01

    We have developed Code-B-1, by modifying an existing code, to predict the failure probabilities of the coated fuel particles of high temperature gas-cooled reactors (HTGRs) under operation. A finite element method (FEM) is employed for the stress calculation part, and Code-B-1 can treat the plastic deformation of the coating layers of the coated fuel particles, which the existing code cannot. (author)
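
    Code-B-1 itself is finite-element based, so the following is only the classical thin-spherical-shell estimate often used as a hand check for a pressurized coating layer, sigma = p * r / (2 t), with invented numbers.

      def thin_shell_hoop_stress(pressure_mpa, radius_um, thickness_um):
          """Classical thin spherical shell estimate: sigma = p * r / (2 * t)."""
          return pressure_mpa * radius_um / (2.0 * thickness_um)

      # Hypothetical 30 MPa internal gas pressure on a 425-um-radius, 35-um-thick coating layer.
      print(thin_shell_hoop_stress(30.0, 425.0, 35.0))   # ~182 MPa hoop stress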

  8. Retrieval Analysis of the Emission Spectrum of WASP-12b: Sensitivity of Outcomes to Prior Assumptions and Implications for Formation History

    Energy Technology Data Exchange (ETDEWEB)

    Oreshenko, Maria; Lavie, Baptiste; Grimm, Simon L.; Tsai, Shang-Min; Malik, Matej; Demory, Brice-Olivier; Mordasini, Christoph; Alibert, Yann; Benz, Willy; Heng, Kevin [University of Bern, Center for Space and Habitability, Gesellschaftsstrasse 6, CH-3012 Bern (Switzerland); Quanz, Sascha P. [ETH Zürich, Department of Physics, Wolfgang-Pauli-Strasse 27, CH-8093 Zürich (Switzerland); Trotta, Roberto, E-mail: maria.oreshenko@csh.unibe.ch, E-mail: kevin.heng@csh.unibe.ch [Astrophysics Group, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2017-09-20

    We analyze the emission spectrum of the hot Jupiter WASP-12b using our HELIOS-R retrieval code and HELIOS-K opacity calculator. When interpreting Hubble and Spitzer data, the retrieval outcomes are found to be prior-dominated. When the prior distributions of the molecular abundances are assumed to be log-uniform, the volume mixing ratio of HCN is found to be implausibly high. A VULCAN chemical kinetics model of WASP-12b suggests that chemical equilibrium is a reasonable assumption even when atmospheric mixing is implausibly rigorous. Guided by (exo)planet formation theory, we set Gaussian priors on the elemental abundances of carbon, oxygen, and nitrogen with the Gaussian peaks being centered on the measured C/H, O/H, and N/H values of the star. By enforcing chemical equilibrium, we find substellar O/H and stellar to slightly superstellar C/H for the dayside atmosphere of WASP-12b. The superstellar carbon-to-oxygen ratio is just above unity, regardless of whether clouds are included in the retrieval analysis, consistent with Madhusudhan et al. Furthermore, whether a temperature inversion exists in the atmosphere depends on one’s assumption for the Gaussian width of the priors. Our retrieved posterior distributions are consistent with the formation of WASP-12b in a solar-composition protoplanetary disk, beyond the water iceline, via gravitational instability or pebble accretion (without core erosion) and migration inward to its present orbital location via a disk-free mechanism, and are inconsistent with both in situ formation and core accretion with disk migration, as predicted by Madhusudhan et al. We predict that the interpretation of James Webb Space Telescope WASP-12b data will not be prior-dominated.
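
    The contrast between log-uniform priors on molecular abundances and Gaussian priors centered on the measured stellar values can be written as the unit-cube prior transforms typically used in nested-sampling retrievals; a minimal Python sketch follows, with placeholder bounds, centers, and widths rather than the values adopted for WASP-12b.

        from statistics import NormalDist

        def log_uniform_prior(u, low_exp=-12.0, high_exp=-1.0):
            """Map u in (0, 1) to a volume mixing ratio drawn log-uniformly in 10^[low, high]."""
            return 10.0 ** (low_exp + u * (high_exp - low_exp))

        def stellar_centered_prior(u, log_abundance_star, width_dex):
            """Gaussian prior in log elemental abundance, centered on the measured stellar value."""
            return NormalDist(mu=log_abundance_star, sigma=width_dex).inv_cdf(u)

        # Example: 16th/50th/84th percentile draws for a hypothetical stellar log(O/H) = -3.3
        for u in (0.16, 0.5, 0.84):
            print(log_uniform_prior(u), stellar_centered_prior(u, -3.3, 0.2))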

  9. Linking the plasma code EDGE2D to the neutral code NIMBUS for a self consistent transport model of the boundary

    International Nuclear Information System (INIS)

    De Matteis, A.

    1987-01-01

    This report describes the fully automatic linkage between the finite difference, two-dimensional code EDGE2D, based on the classical Braginskii partial differential equations of ion transport, and the Monte Carlo code NIMBUS, which solves the integral form of the stationary, linear Boltzmann equation for neutral transport in a plasma. The coupling has been performed for the real poloidal geometry of JET with two belt-limiters and real magnetic configurations with or without a single-null point. The new integrated system starts from the magnetic geometry computed by predictive or interpretative equilibrium codes and yields the plasma and neutrals characteristics in the edge

  10. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    For the study, the behavior mechanism of corrosion products in the primary system of Kori Unit 1 was analyzed, and the inventory of activated corrosion products in the primary system was assessed from domestic plant data with the CRUDTRAN code. The predicted radioactive nuclide inventory in the primary system is expected to serve as baseline data for estimating the volume of radioactive waste when decommissioning nuclear power plants, and as an important criterion for classifying waste levels used to compute waste quantities. The results are also expected to help predict the radiation exposure of workers performing maintenance and repairs in high-radiation areas and to support the selection of decontamination and decommissioning processes for the primary system. In future research, the source term assessment will be extended beyond Westinghouse-type plants to other NPP types such as CANDU and OPR-1000.

  11. PHEBUS FP release analysis using a microstructure-based code

    International Nuclear Information System (INIS)

    Carlucci, L.N.

    1992-03-01

    The results of pre-test fission-product (FP) release analyses of the first two PHEBUS FP experiments, FPT0 and FPT1, indicate that the FREEDOM microstructure-based code predicts significant differences in both the timing and percent of gaseous FP releases for the two tests. To provide an indication of its predictive capability, FREEDOM was also used to model the high-burnup fuel tested in the Oak Ridge National Laboratory experiments VI-2 and VI-3. For these, the code was found to overpredict releases during the early stages of the tests and to underpredict releases during the later stages. The release kinetics in both tests were reasonably predicted, however. In view of the above, it is likely that the FREEDOM predictions of the final cumulative releases for the first two PHEBUS FP tests are lower-bound estimates. However, the significant difference in the predicted timing of initial releases for the two tests is felt to be indicative of what will occur. Therefore, this difference should be considered in the planning and conduct of the two tests, particularly aspects related to on-line measurements

  12. Fusion safety codes International modeling with MELCOR and ATHENA- INTRA

    CERN Document Server

    Marshall, T; Topilski, L; Merrill, B

    2002-01-01

    For a number of years, the world fusion safety community has been involved in benchmarking its safety analysis codes against experimental data to support regulatory approval of a next-step fusion device. This paper discusses the benchmarking of two prominent fusion safety thermal-hydraulic computer codes. The MELCOR code was developed in the US for fission severe-accident safety analyses and has been modified for fusion safety analyses. The ATHENA code is a multifluid version of the US-developed RELAP5 code that is also widely used for fusion safety analyses. The ENEA Fusion Division uses ATHENA in conjunction with the INTRA code for its safety analyses. The INTRA code was developed in Germany and predicts containment building pressures, temperatures and fluid flow. ENEA employs the French-developed ISAS system to couple ATHENA and INTRA. This paper provides a brief introduction to the MELCOR and ATHENA-INTRA codes and presents their modeling results for the following breaches of a water cooling line into the...

  13. A zero-dimensional EXTRAP computer code

    International Nuclear Information System (INIS)

    Karlsson, P.

    1982-10-01

    A zero-dimensional computer code has been designed for the EXTRAP experiment to predict the density and the temperature and their dependence upon parameters such as the plasma current and the filling pressure of the neutral gas. EXTRAP is a Z-pinch immersed in a vacuum octupole field and can be either linear or toroidal. In this code the density and temperature are assumed to be constant from the axis up to a breaking point, from where they decrease linearly in the radial direction out to the plasma radius. All quantities, however, are averaged over the plasma volume, which gives the code its zero-dimensional character. The particle, momentum and energy one-fluid equations are solved, including the effects of the surrounding neutral gas and of oxygen impurities. The code shows that the temperature and density are very sensitive to the shape of the plasma, flatter profiles giving higher temperatures and densities. The temperature, however, is not strongly affected for oxygen concentrations of less than 2% and remains well above the radiation barrier even for higher concentrations. (Author)
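
    To make the volume averaging concrete, the Python sketch below averages a profile that is flat out to a break radius and then falls linearly toward the plasma edge, for a cylindrical plasma; the assumption that the profile reaches zero at the plasma radius, and the numbers used, are illustrative only.

        def profile(r, peak, r_break, a):
            """Flat from the axis to r_break, then linear down to zero at the plasma radius a."""
            return peak if r <= r_break else peak * (a - r) / (a - r_break)

        def volume_average(peak, r_break, a, steps=10_000):
            """<n> = (2 / a^2) * integral_0^a n(r) r dr, evaluated with a midpoint rule."""
            dr = a / steps
            total = sum(profile((i + 0.5) * dr, peak, r_break, a) * (i + 0.5) * dr
                        for i in range(steps))
            return 2.0 * total * dr / a ** 2

        # A larger break radius (flatter profile) gives a larger volume average for the same peak
        print(volume_average(peak=1.0e20, r_break=0.02, a=0.05))
        print(volume_average(peak=1.0e20, r_break=0.04, a=0.05))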

  14. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication via the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A. R. Calderbank, P. W. Shor and A. M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.
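
    For reference, the standard CSS construction invoked here takes a linear code containing its dual to a quantum stabilizer code; stated in general terms (independently of the toric-code specifics):

        \[
          C \subseteq \mathbb{F}_q^{\,n}, \quad C^{\perp} \subseteq C, \quad \dim C = k
          \;\Longrightarrow\;
          \text{a stabilizer code with parameters } [[\,n,\; 2k - n,\; d\,]]_q,
        \]
        \[
          \text{where } d = \min\{\operatorname{wt}(c) : c \in C \setminus C^{\perp}\} \;\ge\; d(C).
        \]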

  15. The development of the code package PERMAK--3D//SC--1

    International Nuclear Information System (INIS)

    Bolobov, P. A.; Oleksuk, D. A.

    2011-01-01

    Code package PERMAK-3D//SC-1 was developed for performing pin-by-pin coupled neutronic and thermal hydraulic calculations of a core fragment of seven fuel assemblies. It was designed on the basis of the 3D multigroup pin-by-pin code PERMAK-3D and the 3D (subchannel) thermal hydraulic code SC-1. The code package predicts the axial and radial pin-by-pin power distribution and the coolant parameters in the simulated region (enthalpies, velocities, void fractions, boiling and DNBR margins). The report describes some new steps in the code package development. Some PERMAK-3D//SC-1 results of WWER calculations are presented in the report. (Authors)
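
    Among the coolant parameters listed, the DNBR at a node is the ratio of the predicted critical heat flux to the local heat flux, and the margin is how far its minimum stays above a design limit; a schematic Python check along a channel is sketched below, with a placeholder CHF value standing in for a real correlation or table.

        def dnbr(q_chf_local, q_local):
            """Departure-from-nucleate-boiling ratio at one axial node."""
            return q_chf_local / q_local

        # Local heat fluxes along a channel (MW/m^2) and a placeholder CHF level
        local_heat_flux = [0.6, 0.9, 1.2, 1.1, 0.7]
        chf_estimate = 2.4                       # would come from a CHF correlation or table
        min_dnbr = min(dnbr(chf_estimate, q) for q in local_heat_flux)
        print(f"minimum DNBR = {min_dnbr:.2f}")  # compared against a design limit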

  16. Roadmap for the Future of Commercial Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Jian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-01

    Building energy codes have significantly increased building efficiency over the last 38 years, since the first national energy code was published in 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, the inability to handle optimization that is specific to building type and use, the inability to account for project-specific energy costs, and the lack of follow-through or accountability after a certificate of occupancy is granted. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. This report provides a high-level review of different formats for commercial building energy codes, including prescriptive, prescriptive packages, capacity constrained, outcome based, and predictive performance approaches. This report also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria.

  17. Recent developments in KTF. Code optimization and improved numerics

    International Nuclear Information System (INIS)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin

    2012-01-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids) named CTF. In this paper, the investigations devoted to the enhancement of the code numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated with some examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  18. Recent developments in KTF. Code optimization and improved numerics

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin [Karlsruhe Institute of Technology (KIT) (Germany). Inst. for Neutron Physics and Reactor Technology (INR)

    2012-11-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids) named CTF. In this paper, the investigations devoted to the enhancement of the code numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated with some examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  19. Subchannel analysis code development for CANDU fuel channel

    International Nuclear Information System (INIS)

    Park, J. H.; Suk, H. C.; Jun, J. S.; Oh, D. J.; Hwang, D. H.; Yoo, Y. J.

    1998-07-01

    Several subchannel codes, such as the COBRA and TORC codes, exist for PWR fuel channels, but none is available in our country for a CANDU fuel channel. A subchannel analysis code for a CANDU fuel channel was therefore developed for the prediction of the flow conditions in the subchannels, for the accurate assessment of the thermal margin, and for evaluating the effects of appendages and of the radial/axial power profiles of the fuel bundles on the flow conditions and CHF. To develop the code, the subchannel analysis methodology and its applicability were reviewed from the CANDU fuel channel point of view. Several thermalhydraulic and numerical models for subchannel analysis of a CANDU fuel channel were developed, as sketched below. Experimental data for the CANDU fuel channel were collected, analyzed and used for validation of the subchannel analysis code developed in this work. (author). 11 refs., 3 tabs., 50 figs
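
    As a schematic of what a subchannel energy balance looks like, the Python sketch below marches the enthalpies of two neighboring subchannels axially with a simple turbulent-mixing exchange term; the geometry, powers, and mixing coefficient are invented placeholders, not models from the code described here.

        # Explicit axial march of subchannel enthalpies with turbulent mixing (schematic only)
        def march_enthalpy(h_in, linear_power, mass_flow, mixing_w, dz, n_nodes):
            """dh_i/dz = q'_i / W_i + (w'/W_i) * (h_j - h_i) for two subchannels i, j."""
            h = [list(h_in)]
            for _ in range(n_nodes):
                h1, h2 = h[-1]
                dh1 = (linear_power[0] / mass_flow[0] + mixing_w / mass_flow[0] * (h2 - h1)) * dz
                dh2 = (linear_power[1] / mass_flow[1] + mixing_w / mass_flow[1] * (h1 - h2)) * dz
                h.append([h1 + dh1, h2 + dh2])
            return h

        # Two subchannels: kJ/kg inlet enthalpy, kW/m linear power, kg/s flow, kg/(m.s) mixing
        profiles = march_enthalpy(h_in=(1200.0, 1200.0), linear_power=(35.0, 25.0),
                                  mass_flow=(0.25, 0.25), mixing_w=0.02, dz=0.1, n_nodes=50)
        print(profiles[-1])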

  20. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in a department of radiology. The program was written in FoxBASE and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary consists of 11 files, one for the organ codes and the others for the pathology codes. The organ code is obtained by typing the organ name or the code number itself from among the upper- and lower-level codes displayed simultaneously on the screen. According to the first digit of the selected organ code, the corresponding pathology code file is chosen automatically. The proper pathology code is then obtained in the same fashion as the organ code selection. An example of a resulting ACR code is '131.3661'. This procedure was reproducible regardless of the number of data fields. Because the program was written as a 'User's Defined Function', decoding of the stored ACR code is achieved by the same program, and it can be incorporated into other data processing programs. The program has the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, it can be used to automate routine work in the department of radiology.
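
    A minimal sketch of the two-step dictionary lookup described above is given below (in Python rather than FoxBASE); the table entries are invented for illustration and do not reproduce the actual ACR dictionaries.

        # Hypothetical fragments of the organ-code table and of one pathology-code file
        ORGAN_CODES = {"skull": "11", "chest": "13", "lung": "131"}
        PATHOLOGY_FILES = {
            "1": {"normal": "1", "inflammation": "2", "neoplasm, malignant": "3661"},
        }

        def build_acr_code(organ_term, pathology_term):
            """Organ lookup, then pathology lookup in the file keyed by the organ code's first digit."""
            organ = ORGAN_CODES[organ_term.lower()]
            pathology_file = PATHOLOGY_FILES[organ[0]]     # pathology file selected automatically
            return f"{organ}.{pathology_file[pathology_term.lower()]}"

        print(build_acr_code("lung", "neoplasm, malignant"))   # produces a '131.3661'-style code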