WorldWideScience

Sample records for astec code adaptability

  1. Analysis of ASTEC code adaptability to severe accident simulation for CANDU type reactors

    International Nuclear Information System (INIS)

    Constantin, Marin; Rizoiu, Andrei

    2008-01-01

In order to prepare the adaptation of the ASTEC code to CANDU NPP severe accident analysis, two kinds of activities were performed: - analysis of the ASTEC modules from the point of view of models and options, followed by exploratory CANDU calculations for the appropriate modules/models; - preparation of the specifications for ASTEC adaptation to CANDU NPPs. The paper is structured in three parts: - a comparison of PWR and CANDU concepts (from the point of view of severe accident phenomena); - exploratory calculations with some ASTEC modules - SOPHAEROS, CPA, IODE, CESAR, DIVA - for problems specific to CANDU type reactors; - an analysis of development needs - algorithms, methods, modules. (authors)

  2. Synthesis of the ASTEC integral code activities in SARNET – Focus on ASTEC V2 plant applications

    International Nuclear Information System (INIS)

    Chatelard, P.; Reinke, N.; Ezzidi, A.; Lombard, V.; Barnak, M.; Lajtha, G.; Slaby, J.; Constantin, M.; Majumdar, P.

    2014-01-01

Highlights: • Independent assessment of the ASTEC severe accident code against experiments is summarised. • The main remaining modelling issues and development perspectives are identified. • Independent assessment of the ASTEC code at full-scale conditions is described. • The main requirements to address BWR and PHWR reactor types are identified. - Abstract: Among the 43 organisations which joined the SARNET2 FP7 project from 2009 to 2013, 31 were involved in the activities on the ASTEC code. This paper presents a synthesis of the main achievements obtained on the ASTEC V2 integral code, jointly developed by IRSN (France) and GRS (Germany), covering development, validation against experimental data and applications at full-scale conditions for both Gen. II and Gen. III plants. As to code development, while the current V2.0 series of ASTEC versions was continuously improved (three successive V2.0 revisions elaborated and released by IRSN and GRS), IRSN and GRS have also intensively continued in parallel the elaboration of the second major ASTEC V2 version (V2.1), to be delivered at the end of 2014. Regarding code validation against experiments, the partners have assessed the V2.0 version and subsequent revisions against more than 50 experiments; this extended assessment notably confirmed that most models are today close to the state of the art, while it also corroborated the already known key topics on which modelling efforts should focus in priority. As to plant applications, the comparison of ASTEC results with other codes supports the conclusion of a globally good agreement for in-vessel and ex-vessel severe accident progression. As to ASTEC adaptations to BWR and PHWR, significant achievements have been obtained through the elaboration and integration, in the future V2.1 version, of dedicated core degradation models, notably to account for multi-coolant flows.

  3. ICARE/CATHARE and ASTEC code development trends

    International Nuclear Information System (INIS)

    Chatelard, P.; Dorsselaere, J.-P. van

    2000-01-01

Regarding computer code development for the simulation of LWR severe accidents, IPSN developed a two-tier approach based on detailed codes such as ICARE/CATHARE and on simplified models to be assembled in the ASTEC integral code. The ICARE/CATHARE code results from the coupling of the ICARE2 code, modelling the core degradation phenomena, with the thermal-hydraulics code CATHARE2. It makes it possible to calculate PWR and VVER severe accident sequences in the whole RCS. The modelling of the early degradation phase can be considered rather complete in the ICARE/CATHARE V1 mod1 version (to be released by mid-2000), whereas some models are still missing for the late phase. The main future developments (ICARE/CATHARE V2) will concern multi-dimensional thermal hydraulics, the quenching of partially damaged cores (mechanical and chemical effects), debris bed two-phase thermal hydraulics (including reflooding) and corium behaviour in the lower head. The other main physical improvements should concern the behaviour of boron carbide control rods, the processes governing the core loss of geometry (transition phase) and the oxidation of relocated melts. The ASTEC (Accident Source Term Evaluation Code) integral code, jointly developed by IPSN and GRS, aims to predict an entire LWR (PWR, VVER and BWR) severe accident sequence from the initiating event through to FP release out of the containment, for source term, PSA level 2 or accident management studies. The version ASTEC V0.3, to be released by mid-2000, can now be considered robust and fast-running enough (between 2 and 12 hours for a one-day accident) and makes it possible to perform, with a multi-compartment containment configuration, any accident scenario study accounting for the main safety systems and operator procedures (spray, recombiners, etc.). The next version, ASTEC V1, to be released at the beginning of 2002, will include the front-end simulation and improved modelling of in-vessel core degradation.
A large validation activity will

  4. European Validation of the Integral Code ASTEC (EVITA)

    International Nuclear Information System (INIS)

    Allelein, H.-J.; Neu, K.; Dorsselaere, J.P. Van

    2005-01-01

The main objective of the European Validation of the Integral Code ASTEC (EVITA) project is to distribute the severe accident integral code ASTEC to European partners in order to apply the validation strategy issued from the VASA project (4th EC FWP). Partners evaluate the code capability through validation on reference experiments and plant applications accounting for severe accident management measures, and compare the results with reference codes. The basis version V0 of ASTEC (Accident Source Term Evaluation Code), jointly developed and basically validated by GRS and IRSN, was made available in late 2000 to the EVITA partners on their individual platforms. Users' training was performed by IRSN and GRS, and the portability of the code on different computers was checked to be correct. A 'hot line' assistance service, continuously available for EVITA code users, was set up. The current version, V1, was released to the EVITA partners at the end of June 2002. It makes it possible to simulate the front-end phase through two new modules: - for reactor coolant system two-phase simplified thermal hydraulics (5-equation approach) during both the front-end and core degradation phases; - for core degradation, based on the structure and main models of ICARE2, the IRSN reference mechanistic code for core degradation, and on other simplified models. The next priorities are clearly identified: code consolidation in order to increase robustness, extension of all plant applications beyond vessel lower head failure and coupling with the fission product modules, and continuous improvement of users' tools. As EVITA has very successfully made the first step towards providing end-users (such as utilities, vendors and licensing authorities) with a well-validated European integral code for the simulation of severe accidents in NPPs, the EVITA partners strongly recommend continuing the validation, benchmarking and application of ASTEC. This work will continue in the Severe Accident Research Network (SARNET) in the 6th Framework Programme

  5. Status of the ASTEC integral code

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Jacq, F.; Allelein, H.J.

    2000-01-01

The ASTEC (Accident Source Term Evaluation Code) integrated code has been developed since 1997 in close collaboration between IPSN and GRS to predict an entire LWR severe accident sequence from the initiating event up to Fission Product (FP) release out of the containment. The applications of such a code are source term determination studies, scenario evaluations, accident management studies and Probabilistic Safety Assessment level 2 (PSA-2) studies. Version V0 of ASTEC is based on the RCS modules of the ESCADRE integrated code (IPSN) and on the upgraded RALOC and FIPLOC codes (GRS) for containment thermal hydraulics and aerosol behaviour. The latest version, V0.2, includes the general feedback from the overall validation performed in 1998 (25 separate-effect experiments, PHEBUS.FP FPT1 integrated experiment), some modelling improvements (e.g. silver-iodine reactions in the containment sump), and the implementation of the main safety systems for Severe Accident Management. Several reactor applications are under way on French and German PWRs, and on VVER-1000, all with a multi-compartment configuration of the containment. The total IPSN-GRS manpower involved in the ASTEC project is today about 20 person-years per year. The main evolution of the next version, V1, foreseen for the end of 2001, concerns the integration of the front-end phase and the improvement of the in-vessel late-phase degradation modelling. (author)

  6. ASTEC V2. Overview of code development and application at GRS

    International Nuclear Information System (INIS)

    Reinke, N.; Nowack, H.; Sonnenkalb, M.

    2011-01-01

The integral code ASTEC (Accident Source Term Evaluation Code), jointly developed since 1996 by the French IRSN and the German GRS, is a fast-running programme which allows the calculation of entire severe accident (SA) sequences in light water reactors from the initiating event up to the release of fission products into the environment, thereby covering all important in-vessel and containment phenomena. The main intended ASTEC application fields are thus accident sequence studies, uncertainty and sensitivity studies, probabilistic safety analysis level 2, as well as support to experiments. The modular structure of ASTEC allows each module to be run independently and separately, e.g. for separate-effect analyses, as well as combinations of multiple modules for coupled-effect testing and integral analyses. The subject of this paper is an overview of the new V2 series of the ASTEC code system and a presentation of exemplary results for its application to severe accident sequences in PWRs. (orig.)

  7. VVER 1000 SBO calculations with pressuriser relief valve stuck open with ASTEC computer code

    International Nuclear Information System (INIS)

    Atanasova, B.P.; Stefanova, A.E.; Groudev, P.P.

    2012-01-01

Highlights: ► We modelled the ASTEC input file for the accident scenario (SBO) and focused the analyses on the behaviour of core degradation. ► We assumed the opening and sticking open of the pressurizer relief valve during the SBO scenario. ► ASTEC v1.3.2 has been used as the reference code for the comparison study with the new version of the ASTEC code. - Abstract: The objective of this paper is to present the results obtained from the calculations performed with the ASTEC computer code for the source term evaluation of a specific severe accident transient. The calculations have been performed with the new version of ASTEC. The ASTEC V2 code version is released by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) and the German GRS (Gesellschaft für Anlagen- und Reaktorsicherheit). This investigation has been performed in the framework of the SARNET2 project (under the Euratom 7th Framework Programme) by the Institute for Nuclear Research and Nuclear Energy – Bulgarian Academy of Sciences (INRNE-BAS).

  8. On boundary layer modelling using the ASTEC code

    International Nuclear Information System (INIS)

    Smith, B.L.

    1991-07-01

The modelling of fluid boundary layers adjacent to non-slip, heated surfaces using the ASTEC code is described. The principal boundary layer characteristics are derived using simple dimensional arguments, and these are developed into criteria for the optimum placement of the computational mesh to achieve a realistic simulation. In particular, the need for externally imposed drag and heat transfer correlations as a function of the local mesh concentration is discussed in the context of both laminar and turbulent flow conditions. Special emphasis is placed, in the latter case, on the k-ε turbulence model, which is standard in the code. As far as possible, the analyses are pursued from first principles, so that no comprehensive knowledge of the history of the subject is required for the general ASTEC user to derive practical advice from the document. Some attention is paid to the use of heat transfer correlations for internal solid/fluid surfaces, whose treatment is not straightforward in ASTEC. It is shown that three formulations are possible to effect the heat transfer, called Explicit, Jacobian and Implicit; the particular advantages and disadvantages of each are discussed with regard to numerical stability and computational efficiency. (author) 18 figs., 1 tab., 39 refs.
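The mesh-placement criteria mentioned above can be illustrated with a short sketch: estimate the friction velocity from a flat-plate skin-friction correlation, then place the first mesh point at a target dimensionless wall distance y+. The correlation and the y+ target below are illustrative textbook choices under assumed flow conditions, not the formulas used in the ASTEC report.

```python
import math

def first_cell_height(u_inf, length, nu, y_plus_target=30.0):
    """Wall distance of the first mesh point for a target y+.

    The flat-plate correlation Cf = 0.026 / Re^(1/7) is an illustrative
    choice for estimating the wall shear, not ASTEC's model.
    """
    re = u_inf * length / nu                 # Reynolds number at x = length
    cf = 0.026 / re ** (1.0 / 7.0)           # skin-friction coefficient
    u_tau = u_inf * math.sqrt(cf / 2.0)      # friction velocity (m/s)
    return y_plus_target * nu / u_tau        # y = y+ * nu / u_tau

# Air at 10 m/s over a 1 m plate (nu ~ 1.5e-5 m^2/s), first point at y+ = 30:
dy = first_cell_height(10.0, 1.0, 1.5e-5)    # on the order of a millimetre
```

A y+ around 30 suits wall functions of the k-ε model; resolving the viscous sublayer directly would require y+ of order 1 and a correspondingly finer mesh.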

  9. Evolution of ASTEC V1.2 rev.1 code for WWER-1000 reactors/SBO sequence

    International Nuclear Information System (INIS)

    Georgieva, J.; Stefanova, A.; Groudev, P.; Tusheva, P.; Kalchev, B.; Passalacqua, R.

    2006-01-01

This paper presents a comparison of ASTEC calculations of severe accidents in a WWER-1000, for a full station blackout event with relief valves stuck open and without hydro-accumulator intervention. The purpose of the analyses is to relate the improvements of the current version (ASTEC V1.2 rev.1) to ASTEC V1.1 p2, such as code modifications and input data improvements. The resulting discrepancies are examined, and case-by-case suggestions for ASTEC improvements are provided

  10. Analysis of SCARABEE BE+3 experiment with ASTEC-Na and comparison with other SFR safety analysis codes

    International Nuclear Information System (INIS)

    Bandini, Giacomino; Ederli, Stefano; Perez-Martin, Sara; Pfrang, Werner; Girault, Nathalie; Cloarec, Laure

    2017-01-01

The ASTEC-Na code was further developed and assessed in the frame of the JASMIN project of the 7th EU Framework Programme to extend the original capability of ASTEC, dealing with severe accident analysis in LWRs, to Sodium-cooled Fast Reactors (SFRs). The in-pile BE+3 experiment from the SCARABEE-N programme has been simulated with ASTEC-Na for thermal-hydraulic model validation purposes. The adequacy of the ASTEC-Na thermal-hydraulic models has also been investigated through comparison with other safety analysis codes. The analysis of the SCARABEE BE+3 test confirms the good performance of the ASTEC-Na code in the calculation of single-phase conditions and boiling onset, while larger deviations are encountered in the analysis of two-phase conditions, mainly regarding the propagation of the boiling front. Furthermore, reasonable agreement was found with the other codes' results. (author)

  11. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  12. Validation of Code ASTEC with LIVE-L1 Experimental Results

    International Nuclear Information System (INIS)

    Bachrata, Andrea

    2008-01-01

Severe accidents with core melting are considered at the design stage of Generation 3+ Nuclear Power Plants (NPPs). Moreover, there is an effort to apply severe accident management to operating NPPs. One of the main goals of severe accident mitigation is corium localization and stabilization. The two strategies that fulfil this requirement are in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the in-vessel retention scenario, large experimental programmes and integrated codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and the accumulation of melt in the lower head under different cooling conditions. Nowadays, the new European computer code ASTEC is being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel corium retention is its validation against the LIVE-L1 experimental results. Details of the experiment are reported, and the results of the application of ASTEC (module DIVA) to the analysis of the test are presented. (author)

  13. Thermal-hydraulic and aerosol containment phenomena modelling in ASTEC severe accident computer code

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Dapper, Maik; Dienstbier, Jiri; Herranz, Luis E.; Koch, Marco K.; Fontanet, Joan

    2010-01-01

Transients in containment systems of different scales (the Phebus.FP containment, the KAEVER vessel, the Battelle Model Containment, the LACE vessel and a VVER-1000 nuclear power plant containment), involving thermal-hydraulic phenomena and aerosol behaviour, were simulated with the integral computer code ASTEC. The results of the simulations of the first four facilities were compared with experimental results, whereas the results of the simulated accident in the VVER-1000 containment were compared with results obtained with the MELCOR code. The main purpose of the simulations was the validation of the CPA module of the ASTEC code. The calculated results support the applicability of the code for predicting in-containment thermal-hydraulic and aerosol phenomena during a severe accident in a nuclear power plant.

  14. Simulation of hydrogen deflagration experiments in the ENACCEF facility using ASTEC code

    International Nuclear Information System (INIS)

    Povilaitis, Mantas; Urbonavicius, Egidijus; Rimkevicius, Sigitas

    2011-01-01

During a hypothetical severe accident in an NPP involving degradation of the core of a light water reactor, hydrogen could be generated and released into the containment atmosphere, posing a deflagration or even a detonation risk. In the case of deflagration, the integrity of the containment would be threatened by the increase of containment atmosphere pressure and temperature. Other risks of containment damage due to turbulent flames exist, caused by high pressure pulses, shock waves, etc. Reliable numerical codes are needed for the simulation of such processes. Although flame acceleration has been studied extensively for homogeneous hydrogen-air mixtures, there are still unresolved issues in this research area, e.g. the effect of the turbulence level on flame acceleration and quenching. This paper presents simulations of hydrogen deflagration experiments in the ENACCEF facility using the ASTEC code, performed in the frame of International Standard Problem No. 49 and the SARNET2 project. The experiments and simulations were performed with the aim of evaluating the capabilities of the participating codes (a number of participants with various codes took part in the project) to simulate hydrogen combustion. ASTEC is an integral nuclear safety analysis code based on a lumped-parameter approach. For the presented simulations, the ASTEC modules CPA (containment thermal hydraulics) and FRONT (hydrogen deflagration) were used. The paper presents the ENACCEF test facility, the nodalisation schemes developed for the calculations, the simulated experiments and the simulation results; a brief description of the FRONT module is also given. The calculation results are compared with the experimental results and analyzed. (author)

  15. ASTEC validation on PANDA SETH

    International Nuclear Information System (INIS)

    Bentaib, Ahmed; Bleyer, Alexandre; Schwarz, Siegfried

    2009-01-01

The ASTEC code, developed by IRSN and GRS, is aimed at providing an integral code for simulating the whole course of severe accidents in Light Water Reactors. ASTEC is a complex system of codes for reactor safety assessment; in this validation, only the thermal-hydraulic module of the ASTEC code is used. ASTEC is a lumped-parameter code able to represent multi-compartment containments. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. This paper presents the validation of ASTEC V1.3 on tests T9 and T9bis of the PANDA OECD/SETH experimental programme, which investigate the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Dedicated meshes were developed to simulate the test facility with the two vessels DW1 and DW2 and the interconnecting pipe. The numerical results obtained are analyzed and compared with the experiments; the comparison shows good agreement between experiments and calculations. (author)
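The zone/junction architecture described above can be sketched in a few lines: two well-mixed gas zones connected by a single junction, with the flow driven by the pressure difference until the zones equilibrate. The isothermal ideal-gas treatment, the orifice flow model and all numerical values below are illustrative assumptions, far simpler than ASTEC's CPA module.

```python
import math

R, T = 8.314, 300.0            # gas constant (J/mol/K); isothermal assumption
V = [50.0, 100.0]              # zone free volumes (m^3)
n = [3000.0, 2000.0]           # moles of gas initially in each zone
A, CD = 0.01, 0.6              # junction flow area (m^2), discharge coefficient
M = 0.029                      # molar mass of air (kg/mol)

def pressure(i):
    return n[i] * R * T / V[i]               # ideal-gas zone pressure (Pa)

dt = 0.01                                    # explicit time step (s)
for _ in range(10000):
    dp = pressure(0) - pressure(1)
    donor = 0 if dp > 0 else 1               # density taken from the donor zone
    rho = n[donor] * M / V[donor]
    # incompressible orifice flow through the junction (illustrative)
    mdot = CD * A * math.copysign(math.sqrt(2.0 * rho * abs(dp)), dp)
    n[0] -= mdot / M * dt
    n[1] += mdot / M * dt
# The junction flow drives the two zone pressures to a common value.
```

A real containment code adds energy balances, steam condensation on structures and many zones, but the pressure-driven junction flow above is the core of the lumped-parameter idea.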

  16. Contributions to the validation of the ASTEC V1 code

    International Nuclear Information System (INIS)

    Constantin, Marin; Rizoiu, Andrei; Turcu, Ilie

    2004-01-01

In the frame of the PHEBEN2 project (Validation of severe accident codes for application to nuclear power plants, based on the PHEBUS FP experiments), developed within the EU 5th Framework Programme (FP5), the INR-Pitesti team received the task of determining the sensitivity of the ASTEC code. The PHEBEN2 project was initiated in 1998 and gathered 13 partners from 6 EU member states; 4 partners from 3 candidate states (Hungary, Bulgaria and Romania) joined later. The work was contracted with the European Commission (under contract FIKS-CT1999-00009), which financially supports the research effort up to about 50%. According to the contract provisions, the INR team participated in Working Package 1 (WP1), which refers to the validation of the integral computation codes against the PHEBUS experimental data, and in Working Package 3 (WP3), referring to the evaluation of the codes to be applied to nuclear power plants for risk evaluation, nuclear safety margin evaluation and the determination/evaluation of the measures to be adopted in case of a severe accident. The present work continues the efforts towards preliminary validation of the ASTEC code, focusing on stand-alone sensitivity analyses applied to two of the most important modules of the code, namely DIVA and SOPHAEROS

  17. Fission product release from nuclear fuel I. Physical modelling in the ASTEC code

    International Nuclear Information System (INIS)

    Brillant, G.; Marchetto, C.; Plumecocq, W.

    2013-01-01

Highlights: • The physical modelling of FP and SM release in ASTEC is presented. • The release is described as solid-state diffusion within the fuel for high-volatile FP. • The release is described as FP vaporisation for semi-volatile FP. • The release is described as fuel vaporisation for low-volatile FP. • ASTEC validation is presented in the second paper. - Abstract: This article is the first of a series of two articles dedicated to the mechanisms of fission product release from a degraded core as they are modelled in the ASTEC code. The ASTEC code aims at simulating severe accidents in nuclear reactors from the initiating event up to the radiological consequences on the environment. This code is used for several applications such as nuclear plant safety evaluation, including probabilistic studies, and emergency preparedness. To cope with the requirements of robustness and low calculation time, the code is based on a semi-empirical approach, and only the main limiting phenomena that govern the release from intact rods and from debris beds are considered. For solid fuel, fission products are classified into three groups depending on their degree of volatility. The release kinetics of volatile fission products depend on the rate-limiting process of solid-state diffusion through the fuel grains. For semi-volatile fission products, the release from the open fuel porosities is assumed to be governed by vaporisation and mass transfer processes. The key phenomenon for the release of low-volatile fission products is supposed to be fuel volatilisation. A similar approach is used for the release of fission products from a rubble bed. An in-depth validation of the code, including both analytical and integral experiments, is the subject of the second article
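The rate-limiting solid-state diffusion mentioned for volatile fission products is classically described by a Booth-type equivalent-sphere model. The sketch below uses the standard short-time and long-time approximations of that model; it is an illustrative textbook form, not the actual ASTEC correlation, and the numerical values are assumptions.

```python
import math

def booth_release(Dp, t):
    """Fractional fission-product release from an equivalent sphere
    (Booth-type model).  Dp is the reduced diffusion coefficient
    D/a^2 in 1/s, with a the fuel grain radius.
    """
    x = Dp * t
    if x < 1.0 / math.pi ** 2:                  # short-time branch
        f = 6.0 * math.sqrt(x / math.pi) - 3.0 * x
    else:                                        # long-time branch
        f = 1.0 - (6.0 / math.pi ** 2) * math.exp(-math.pi ** 2 * x)
    return min(f, 1.0)

# Example: D/a^2 = 1e-10 1/s held for 1e6 s releases a few percent
f = booth_release(1.0e-10, 1.0e6)
```

In practice the diffusion coefficient is a strong (Arrhenius) function of fuel temperature, which is what couples the release rate to the degradation transient.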

  18. ASTEC V2 severe accident integral code main features, current V2.0 modelling status, perspectives

    International Nuclear Information System (INIS)

    Chatelard, P.; Reinke, N.; Arndt, S.; Belon, S.; Cantrel, L.; Carenini, L.; Chevalier-Jabet, K.; Cousin, F.; Eckel, J.; Jacq, F.; Marchetto, C.; Mun, C.; Piar, L.

    2014-01-01

The severe accident integral code ASTEC, jointly developed for almost 20 years by IRSN and GRS, simulates the behaviour of a whole nuclear power plant under severe accident conditions, including severe accident management by engineering systems and procedures. Since 2004, the ASTEC code has progressively become the reference European severe accident integral code, in particular through the intensification of the research activities carried out in the frame of the SARNET European network of excellence. The first version of the new series, ASTEC V2, was released in 2009 to about 30 organizations worldwide, and in particular to SARNET partners. With respect to the previous V1 series, this new V2 series includes advanced core degradation models (derived from the ICARE2 IRSN mechanistic code) and the extensions necessary to be applicable to Gen. III reactor designs, notably a description of the core catcher component to simulate severe accident transients applied to the EPR reactor. Besides these two key evolutions, most of the other physical modules have also been improved, and ASTEC V2 is now coupled to the SUNSET statistical tool to facilitate uncertainty and sensitivity analyses. The ASTEC models are today at the state of the art (in particular the fission product models with respect to source term evaluation), except for the quenching of a severely damaged core. Beyond the need to develop an adequate model for the reflooding of a degraded core, the other main medium-term objectives are to progress further on the ongoing extension of the scope of application to BWR and CANDU reactors, to spent fuel pool accidents, and to accidents in both the ITER fusion facility and Gen. IV reactors (with priority on sodium-cooled fast reactors), while making ASTEC evolve towards a severe accident simulator constitutes the main long-term objective. This paper presents the status of the ASTEC V2 versions, focussing on the description of the V2.0 models for water-cooled nuclear plants

  19. An algorithm for solving thermalhydraulic equations in complex geometries: the Astec code

    International Nuclear Information System (INIS)

    Lonsdale, R.D.

    1987-01-01

    By applying a finite volume approach to a finite element mesh, the ASTEC computer code allows three-dimensional incompressible fluid flow and heat transfer in complex geometries to be simulated realistically, without making excessive demands on computing resources. The methods used in the code are described, and examples of the application of the code are presented
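The finite volume idea described above, balancing the fluxes through the faces of each control volume, can be illustrated in one dimension with a heat conduction problem. The code below is a minimal sketch of the finite volume principle, not of ASTEC's three-dimensional algorithm; grid size, diffusivity and boundary values are assumed for illustration.

```python
# 1-D finite-volume heat conduction: each cell balances the Fourier
# fluxes through its two faces, marched explicitly to steady state.
n_cells, length = 20, 1.0
dx = length / n_cells
alpha = 1.0e-4                      # thermal diffusivity (m^2/s)
T_left, T_right = 100.0, 0.0        # fixed wall temperatures
T = [0.0] * n_cells
dt = 0.25 * dx * dx / alpha         # stable explicit time step

for _ in range(10000):
    Tn = T[:]
    for i in range(n_cells):
        # ghost values enforce the Dirichlet wall temperatures
        Tw = T[i - 1] if i > 0 else 2.0 * T_left - T[0]
        Te = T[i + 1] if i < n_cells - 1 else 2.0 * T_right - T[-1]
        Tn[i] = T[i] + alpha * dt / dx ** 2 * (Tw - 2.0 * T[i] + Te)
    T = Tn
# Steady state: a linear temperature profile between T_left and T_right.
```

On an unstructured finite element mesh the same balance is written as a sum of fluxes over an arbitrary number of cell faces, which is what lets the method handle complex geometries.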

  20. Aerosol sampling and Transport Efficiency Calculation (ASTEC) and application to surtsey/DCH aerosol sampling system: Code version 1.0: Code description and user's manual

    International Nuclear Information System (INIS)

    Yamano, N.; Brockmann, J.E.

    1989-05-01

This report describes the features and use of the Aerosol Sampling and Transport Efficiency Calculation (ASTEC) code. The ASTEC code has been developed to assess aerosol transport efficiency in source term experiments at Sandia National Laboratories. The code also has broad application to aerosol sampling and transport efficiency calculations in general, as well as to aerosol transport considerations in nuclear reactor safety issues. 32 refs., 31 figs., 7 tabs.

  1. Simulation of the fuel rod bundle test QUENCH-03 using the system codes ASTEC and ATHLET-CD

    International Nuclear Information System (INIS)

    Kruse, P.; Koch, M.K.

    2011-01-01

The QUENCH-03 test was performed on 21 January 1999 at FZK (Forschungszentrum Karlsruhe) to investigate the reflood behaviour of PWR (Pressurized Water Reactor) fuel rods with little oxidation. This paper presents the results of the simulation of QUENCH-03 performed with version V1.3 of the integral code ASTEC (Accident Source Term Evaluation Code), which is being developed by IRSN (France) in cooperation with GRS (Germany), and with version 2.1A of the mechanistic code ATHLET-CD (Analysis of Thermal-hydraulics of Leaks and Transients - Core Degradation), which is under development by GRS. First, the QUENCH test facility and the QUENCH test programme in general are described, followed by the conduct of test QUENCH-03 and a description of the codes ASTEC and ATHLET-CD with the associated modelling of the test section. The results of the calculations show that during the heat-up and transient phases both codes reproduce the bundle and shroud temperatures as well as the hydrogen production in good approximation to the experimental data. During the quench phase and up to the end of the test, only the PRATER oxidation model of ASTEC simulates the hydrogen production very well; the other oxidation models of ASTEC cannot fully reproduce the measured amount of hydrogen, and ATHLET-CD underestimates the integral amount at the end of the test. In the ASTEC calculations the temperatures during the quench phase show qualitatively good results, with only time delays at some elevations of the bundle. ATHLET-CD reproduces the thermal behaviour up to the first temperature escalation very well; after that, the temperatures are partly overestimated, and the time delays recognized in the ASTEC calculations are seen as well. The results of the integral code ASTEC emphasize that the calculation of QUENCH-03 is possible and leads to good results concerning hydrogen release and the corresponding temperatures.
Because the QUENCH-03 test was
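Cladding oxidation models such as those compared above are typically parabolic-rate laws with an Arrhenius temperature dependence. The sketch below integrates such a law over a piecewise-constant temperature history; the constants A and B are placeholders for illustration, not the PRATER coefficients or any other published correlation.

```python
import math

def oxide_thickness(temps, dt, A=1.0e-6, B=1.8e4):
    """Integrate a parabolic oxidation law d(delta^2)/dt = A*exp(-B/T).

    A (m^2/s) and B (K) are placeholder Arrhenius constants chosen for
    illustration only -- not the PRATER (or any published) coefficients.
    """
    delta_sq = 0.0
    for T in temps:                       # piecewise-constant history (K)
        delta_sq += A * math.exp(-B / T) * dt
    return math.sqrt(delta_sq)            # oxide layer thickness (m)

# 100 s at 1500 K followed by 100 s at 2000 K:
d = oxide_thickness([1500.0] * 100 + [2000.0] * 100, 1.0)
```

The parabolic form captures why pre-oxidation slows further attack (the existing layer is in delta_sq) and why temperature escalations dominate the hydrogen source term.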

  2. Severe damage analysis of VVER 1000 following large break LOCA using Astec code

    International Nuclear Information System (INIS)

    Chatterjee, B.; Mukhopadhyay, D.; Lele, H.G.; Ghosh, A.K.; Kushwaha, H.S.

    2007-01-01

Severe accident analysis of a reactor is an important aspect of source term evaluation, which in turn helps in emergency planning. An analysis has been carried out for a VVER-1000 (V320) reactor following a Large Break LOCA (loss of coolant accident) together with a Station Blackout (SBO). The computer code ASTEC (jointly developed by IRSN, France, and GRS, Germany) is used for analyzing the transient; this integral code has been designed to be used as a reference code for PSA level 2 studies. Two cases have been analysed with version ASTEC V1.2-rev1: in the first case the hydro-accumulators are considered unavailable, while the second case has been analysed with hydro-accumulators available. ASTEC predictions have been studied for the in-vessel phase of the accident up to vessel failure. Vessel failure was observed at 6979 s when the accumulators were assumed unavailable, and was considerably delayed (19294 s) with operating accumulators. The hydrogen production was found to be much larger in the case with accumulators (22% of the total Zr inventory) than in the case without accumulators (1.5% of the total Zr inventory)
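Hydrogen production quoted as a percentage of the Zr inventory follows from the oxidation stoichiometry Zr + 2H2O → ZrO2 + 2H2, i.e. two moles of hydrogen per mole of zirconium oxidized. A worked example, in which the 20-tonne inventory is a hypothetical value not taken from the paper:

```python
M_ZR, M_H2 = 91.22, 2.016          # molar masses (g/mol)

def hydrogen_mass(zr_inventory_kg, oxidized_fraction):
    """Zr + 2 H2O -> ZrO2 + 2 H2: two moles of H2 per mole of Zr."""
    return zr_inventory_kg * oxidized_fraction * 2.0 * M_H2 / M_ZR

# Hypothetical 20-tonne core Zr inventory, 22% oxidized (cf. the abstract):
m_h2 = hydrogen_mass(20000.0, 0.22)     # roughly 190 kg of hydrogen
```

Each kilogram of oxidized zirconium thus yields about 0.044 kg of hydrogen, which is why the oxidized fraction is the natural unit for comparing scenarios.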

  3. Modelling and description of PHEBUS FPT1 experiment with the computer code ASTEC

    International Nuclear Information System (INIS)

    Tusheva, P.; Kalchev, B.

    2005-01-01

    The PHEBUS Fission Product (FP) programme was initiated in 1988 after the major severe reactor accidents at Three Mile Island and Chernobyl. Its main objective is to study the release, transport and retention of fission products in an in-pile facility under severe accident conditions representative of a LWR. This paper covers the description and modelling of the FPT1 experiment with the ASTEC code. The main calculated events, the temperature evolution at the middle part of the test bundle, the state of bundle degradation at the end of the calculation and the calculated hydrogen production are presented and discussed. The ASTEC calculation shows good agreement with the experiment, although the calculated hydrogen production slightly overestimates the experimental value.

  4. Status of emergency spray modelling in the integral code ASTEC

    International Nuclear Information System (INIS)

    Plumecocq, W.; Passalacqua, R.

    2001-01-01

    Containment spray systems are emergency systems that would be actuated in the very-low-probability events that may lead to severe accidents in Light Water Reactors. In most cases, the primary function of the spray is to remove heat and condense steam in order to reduce pressure and temperature in the containment building; the spray also washes out fission products (aerosols and gaseous species) from the containment atmosphere. The efficiency of the spray system for containment depressurization as well as for aerosol removal during a severe accident depends on the evolution of the spray droplet size distribution with height in the containment, driven by kinetic and thermal relaxation, gravitational agglomeration and mass transfer with the gas. A model taking all of these phenomena into account has been developed and implemented in the ASTEC code, with a validation of the droplet relaxation against the CARAIDAS experiment (IPSN). Applications of this modelling to a PWR 900 during a severe accident, with special emphasis on the effect of spray on containment hydrogen distribution, have been performed in a multi-compartment configuration with the ASTEC V0.3 code. (author)
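    The kinetic relaxation mentioned above can be illustrated with a one-droplet momentum balance; a minimal sketch with an assumed constant drag coefficient and gas properties (the ASTEC model also covers thermal relaxation, agglomeration and mass transfer, all omitted here):

```python
import math

G = 9.81          # m/s^2
RHO_GAS = 1.5     # kg/m^3, ASSUMED containment atmosphere density
RHO_LIQ = 1000.0  # kg/m^3, water
CD = 0.44         # ASSUMED constant drag coefficient

def fall_velocity(v0, diameter, dt, steps):
    """Explicit-Euler integration of the droplet momentum balance
    dv/dt = g - (3*Cd*rho_g)/(4*rho_l*d) * v*|v|, downward positive."""
    k = 3.0 * CD * RHO_GAS / (4.0 * RHO_LIQ * diameter)
    v = v0
    for _ in range(steps):
        v += (G - k * v * abs(v)) * dt
    return v

# A 1 mm droplet released at rest relaxes toward its terminal velocity
# v_t = sqrt(g/k), about 4.5 m/s for these assumed properties.
k = 3.0 * CD * RHO_GAS / (4.0 * RHO_LIQ * 1.0e-3)
v_terminal = math.sqrt(G / k)
v = fall_velocity(0.0, 1.0e-3, 1.0e-3, 10000)
print(v, v_terminal)
```

    Because the relaxation distance scales with droplet size, the velocity (and hence residence time and collection efficiency) of each size class evolves differently with fall height, which is why the size distribution versus height matters for both depressurization and washout.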

  5. Comparative severe accident analysis of WWER 1000/B 320 LOCA DN100 computed by computer codes ASTEC V1.1 and SCDAP/RELAP5

    International Nuclear Information System (INIS)

    Kalchev, B.; Dimov, D.; Tusheva, P.; Mladenov, I.

    2005-01-01

    This paper presents the modelling approach for a LOCA 100 mm sequence for the WWER 1000-B 320 type of reactor with the integral ASTEC computer code and the SCDAP/RELAP5 computer code. As the basic input deck, the reference input file for the Balakovo NPP from the released ASTEC CD has been applied. For the first part of the SBLOCA calculations, the ASTEC v1.1 modules CESAR, DIVA and CPA have been activated in coupled mode. For the SCDAP/RELAP5 calculation, an input deck for WWER 1000-B 320 has been applied which is meant to be close to the initial and boundary conditions of the ASTEC WWER 1000 input deck. A comparison of the SBLOCA 100 mm sequence between ASTEC v1.1 and SCDAP/RELAP5 is presented. ASTEC predicts vessel failure at 15620 s. ASTEC and SCDAP/RELAP5 give close but not identical results, as can be observed in the trends. The comparison of the 100 mm break shows that SCDAP/RELAP5 predicts clear phenomenological changes in the primary pressure evolution and molten pool formation. Both codes predict similar hydrogen production masses around 5000 s

  6. Containment Modelling with the ASTEC Code

    International Nuclear Information System (INIS)

    Sadek, Sinisa; Grgic, Davor

    2014-01-01

    ASTEC is an integral computer code jointly developed by the Institut de Radioprotection et de Surete Nucleaire (IRSN, France) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Germany) to assess nuclear power plant behaviour during a severe accident (SA). It consists of 13 coupled modules which compute the various SA phenomena in the primary and secondary circuits of nuclear power plants (NPP) and in the containment. The ASTEC code was used to model and simulate NPP behaviour during a postulated station blackout accident in NPP Krsko, a two-loop pressurized water reactor (PWR) plant. The primary system of the plant was modelled with 110 thermal-hydraulic (TH) volumes, 113 junctions and 128 heat structures; the secondary system with 76 TH volumes, 77 junctions and 87 heat structures. The containment was modelled with 10 TH volumes, representing the containment as a set of distinct compartments connected by 23 junctions; a total of 79 heat structures were used to simulate the outer containment walls and the internal steel and concrete structures. Prior to the transient calculation, a steady state analysis was performed. In order to achieve correct plant initial conditions, the operation of the regulation systems was modelled; the regulated parameters were the pressurizer pressure, the pressurizer narrow range level and the steam mass flow rates in the steam lines. The accident analysis was focused on containment behaviour; however, the complete integral NPP analysis was carried out in order to provide correct boundary conditions for the containment calculation. During the accident, the containment integrity was challenged by the release of reactor coolant through degraded coolant pump seals and, later in the accident, following release of the corium out of the reactor pressure vessel, by molten corium concrete interaction and direct containment heating mechanisms. Impact of those processes on relevant

  7. Severe accident analysis in a two-loop PWR nuclear power plant with the ASTEC code

    International Nuclear Information System (INIS)

    Sadek, Sinisa; Amizic, Milan; Grgic, Davor

    2013-01-01

    The ASTEC/V2.0 computer code was used to simulate a hypothetical severe accident sequence in the nuclear power plant Krsko, a two-loop pressurized water reactor (PWR) plant. ASTEC is an integral code jointly developed by the Institut de Radioprotection et de Surete Nucleaire (IRSN, France) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Germany) to assess nuclear power plant behaviour during a severe accident. The analysis was conducted in two steps. First, a steady state calculation was performed in order to confirm the applicability of the plant model and to obtain correct initial conditions for the accident analysis. The second step was the calculation of a station blackout accident with leakage of the primary coolant through degraded reactor coolant pump seals, i.e. a small LOCA without makeup capability. Two scenarios were analyzed, one with and one without the auxiliary feedwater (AFW); the latter scenario, without the AFW, resulted in earlier core damage. In both cases the accident ended with a core melt and a reactor pressure vessel failure with significant release of hydrogen. In addition, the results of the ASTEC calculation were compared with the results of a RELAP5/SCDAPSIM calculation for the same transient scenario, showing good agreement between the predictions of the two codes. (orig.)

  8. ASTEC application to in-vessel corium retention

    International Nuclear Information System (INIS)

    Tarabelli, D.; Ratel, G.; Pelisson, R.; Guillard, G.; Barnak, M.; Matejovic, P.

    2009-01-01

    This paper summarizes the work done in the SARNET European Network of Excellence on Severe Accidents (6th Framework Programme of the European Commission) on the capability of the ASTEC code to simulate in-vessel corium retention (IVR). This code, jointly developed by the French Institut de Radioprotection et de Surete Nucleaire (IRSN) and the German Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS) for the simulation of severe accidents, is now considered the European reference simulation tool. First, the DIVA module of the ASTEC code is briefly introduced. This module treats core degradation and corium thermal behaviour once the corium has relocated to the reactor lower head. The former ASTEC V1.2 version assumed a predefined stratified molten pool configuration with a metallic layer on top of the volumetrically heated oxide pool. In order to reflect the results of the MASCA project, improved models that enable the modelling of more general corium pool configurations were implemented by the CEA (France) into the DIVA module of the ASTEC V1.3 code. In parallel, the CEA worked on the ASTEC modelling of external reactor vessel cooling (ERVC). The capability of the ASTEC CESAR circuit thermal-hydraulics to simulate the ERVC was tested; the conclusion was that the CESAR module is capable of simulating this system, although some numerical and physical instabilities can occur. Developments were then made on the coupling between the DIVA and CESAR modules in close collaboration with IRSN. Under specific conditions, code oscillations remain, and an analysis was made to reduce the numerical part of these oscillations. A comparison of CESAR results with the SULTAN experiments (CEA) showed agreement on the pressure differences. The ASTEC V1.2 code version was applied to IVR simulation for VVER-440/V213 reactors assuming a defined corium mass, composition and decay heat. The external cooling of the reactor vessel wall was simulated by applying an imposed coolant temperature and heat transfer

  9. ASTEC V2 severe accident integral code: Fission product modelling and validation

    International Nuclear Information System (INIS)

    Cantrel, L.; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.

    2014-01-01

    One main goal of the severe accident integral code ASTEC V2, jointly developed for more than 15 years by IRSN and GRS, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications are source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code, with each module implementing the models for one part of the phenomenology: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment building in the IODE module. Three other modules, CPA, ISODOP and DOSE, compute, respectively, the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate which is needed to model radiochemistry in the gaseous phase. In ELSA, release models are semi-mechanistic and have been validated against a wide range of experimental data, notably the VERCORS experiments. In SOPHAEROS, the models can be divided into two parts: vapour phase phenomena and aerosol phase phenomena. In IODE, iodine and ruthenium chemistry are modelled with a semi-mechanistic approach; these FPs can form volatile species and are particularly important in terms of potential radiological consequences. The models in these three modules are based on a wide experimental database, resulting to a large extent from international programmes, and they are considered to be at the state of the art of R and D knowledge. This paper illustrates some of the FP modelling capabilities of ASTEC, and computed values are compared to experimental results that are part of the validation matrix
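    The control-volume approximation used for circuit transport can be illustrated with a steady-state transmission estimate: each well-mixed volume with a first-order deposition sink attenuates the aerosol stream by a factor Q/(Q + lambda*V). The flow, volumes and rates below are arbitrary illustrative numbers, not ASTEC data:

```python
def transmitted_fraction(q, volumes, dep_rates):
    """Steady-state aerosol fraction surviving a chain of well-mixed
    control volumes.  q: volumetric flow [m^3/s]; volumes [m^3];
    dep_rates: first-order deposition constants [1/s], one per volume."""
    f = 1.0
    for v, lam in zip(volumes, dep_rates):
        f *= q / (q + lam * v)  # mass balance: q*C_in = q*C_out + lam*V*C_out
    return f

# Three volumes with ILLUSTRATIVE sizes and deposition rates.
f = transmitted_fraction(0.5, [1.0, 2.0, 1.5], [0.05, 0.02, 0.1])
print(f)  # about 0.65 of the aerosol reaches the outlet
```

    A mechanistic module like SOPHAEROS computes the per-volume deposition constants from the actual vapour and aerosol physics; the product structure over the volume chain is the part the sketch retains.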

  10. Progress and perspectives of ASTEC applications in the European Network SARNET

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Allelein, H.J.; Neu, K.

    2006-01-01

    comparisons to be performed more intensively in the next period is to learn whether the differences in results are caused by differences in physical models and/or in safety systems modelling. These benchmarks will also focus more on fission product behaviour and on specific parts of the sequences such as MCCI. Concrete action plans and associated teams have been set up for BWR and CANDU model adaptation and future benchmarks. In the short term, the code evolution will focus on feedback from PSA level2 1300 and SARNET ASTEC V1 applications, and on a new model of reflooding of a degraded core. The documentation will be largely improved, mainly the user manuals and user guidelines. In parallel, the preparation by IRSN-GRS of a new family V2 of ASTEC versions has started; the general specifications will account for the needs expressed by the SARNET users. ASTEC V2.0 is planned for 2008, with the ICARE2 IRSN mechanistic code as the new core degradation module. This ASTEC version will include EPR applicability, the simulation of external vessel cooling for new reactor designs and a full modelling of ruthenium behaviour in the circuit and containment for air ingress situations. Beyond that, future evolutions of the ASTEC code will act as a repository of the knowledge created in SARNET and in the international context. The possible use of ASTEC to analyse severe accident sequences in future reactors (Generation IV, ITER) is under consideration. (authors)

  11. ASTEC applications to VVER-440/V213 reactors

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, Peter, E-mail: ivstt@nextra.sk; Barnak, Miroslav; Bachraty, Milan; Vranka, Lubomir

    2014-06-01

    Since the beginning of ASTEC development by IRSN and GRS, the code has been widely applied to VVER reactors. In this paper, specific features of the VVER-440/V213 reactor design that are important from the modelling point of view are first briefly described. Then the validation of the ASTEC code, with focus on its applicability to VVER reactors, is briefly summarised, and the results obtained with the ASTEC V2.0-rev1 version for the ISP-33 PACTEL natural circulation experiment are presented. In the next section, the application of the ASTEC V2.0-rev1 code to the upgrade of VVER-440/V213 NPPs to cope with the consequences of severe accidents is described. This upgrade includes the adoption of in-vessel retention via external reactor vessel cooling and the installation of large-capacity passive autocatalytic recombiners. Results of the analysis, with focus on corium localisation and stabilisation inside the reactor vessel, hydrogen control in the confinement and prevention of long-term confinement pressurisation, are presented.

  12. First analysis of AGS0, LT2 and E9 CABRI tests with the new SFR safety code ASTEC-Na

    International Nuclear Information System (INIS)

    Perez-Martin, Sara; Bandini, Giacomino; Matuzas, Vaidas; Buck, Michael; Girault, Nathalie

    2015-01-01

    Within the framework of the European JASMIN project, the ASTEC-Na code is being developed for the safety analysis of severe accidents in sodium-cooled fast reactors (SFR). For the first phase of validation of the ASTEC-Na fuel thermo-mechanical models, three in-pile tests conducted in the CABRI experimental reactor have been selected for analysis. We present here the preliminary results of the simulation of two transient-overpower tests and one power ramp test (AGS0, LT2 and E9, respectively), in which no pin failure occurred during the transient. ASTEC-Na results are compared against experimental data and the results of other safety codes, both for the initial steady state conditions prior to the transient onset and for the fuel pin behaviour during the transients. (author)

  13. Modeling of severe accident sequences with the new modules CESAR and DIVA of ASTEC system code

    International Nuclear Information System (INIS)

    Pignet, Sophie; Guillard, Gaetan; Barre, Francois; Repetto, Georges

    2003-01-01

    Systems of computer codes, so-called 'integral' codes, are being developed to simulate the scenario of a hypothetical severe accident in a light water reactor, from the initial event until the possible radiological release of fission products out of the containment. They couple the predominant physical phenomena that occur in the different reactor zones and simulate the actuation of safety systems by procedures and by operators. To allow a great number of scenarios to be studied, a compromise must be found between precision of results and calculation time: one day of accident time should take less than one day of real time to simulate on a PC. This search for a compromise is a real challenge for such integral codes. The development of the ASTEC integral code was initiated jointly by IRSN and GRS as an international reference code. The latest version 1.0 of ASTEC, including the new modules CESAR and DIVA which model the behaviour of the reactor cooling system and the core degradation, is presented here. Validation of the modules and one plant application are described

  14. Assessment of capability for modeling the core degradation in 2D geometry with ASTEC V2 integral code for VVER type of reactor

    International Nuclear Information System (INIS)

    Dimov, D.

    2011-01-01

    The ASTEC code is progressively becoming the reference European severe accident integral code, in particular through the intensification of research activities carried out since 2004. The purpose of this analysis is to assess the ASTEC code modelling of the main phenomena arising during hypothetical severe accidents, particularly in-vessel degradation in 2D geometry. The investigation covers both the early and late phases of reactor core degradation as well as the determination of the corium that will enter the reactor cavity. The initiating event is a station blackout; in order to obtain severe accident conditions, failure of all active components of the emergency core cooling system is assumed. The analysis focuses on the ICARE module of the ASTEC code and particularly on the so-called MAGMA model. The aim of the study is to determine the capability of the integral code to simulate core degradation and to determine the composition of the corium entering the reactor cavity. (author)

  15. Comparison of ASTEC 1.3 and ASTEC 1.3 R2 calculations in case of SBO for VVER-1000 reactor

    International Nuclear Information System (INIS)

    Atanasova, B.; Stefanova, A.; Grudev, P.

    2009-01-01

    The report presents the results of severe accident analyses performed with both the ASTEC v1.3 and ASTEC v1.3R2 versions of the computer code for a VVER 1000 type of reactor. The purpose of this analysis is to assess the progress of ASTEC code modelling of the main phenomena arising during hypothetical severe accidents. The final target of these analyses is to estimate the behaviour of the ASTEC code and its capability to simulate severe accidents, including safety systems and Severe Accident Management (SAM) procedures. The analyses have been performed assuming a station blackout with simultaneous loss of the HPIS, LPIS (ECCSs), EFWS and spray system due to failure of the DGs; the hydro-accumulators are not available. In the calculation the PRZ relief valves are assumed to open and stick open, and the fission product path through the SEMPELL valve has been modelled. It should be noted that this investigation was limited to the 'in-vessel' phase of the sequence; therefore the effect of sprays on the containment atmosphere has not been studied. (authors)

  16. VVER-1000 small-medium break LOCAs predictions by ASTEC

    International Nuclear Information System (INIS)

    Georgieva, J.; Stefanova, A.; Atanasova, B.; Groudev, P.; Tusheva, P.; Mladenov, I.; Dimov, D.; Passalacqua, R.

    2005-01-01

    This paper deals with an assessment of the ASTEC1.1v0 code in the simulation of small and medium break LOCAs (ranging from 30 mm up to 70 mm equivalent diameter). The reference power plant for this analysis is a VVER-1000/V320 (e.g. Units 5 and 6 at Kozloduy NPP). A preliminary comparison with the MELCOR and RELAP-SCDAP severe accident codes is discussed. This investigation has been performed in the framework of the SARNET project (under the Euratom 6th Framework Programme) by the FoBAUs group (Forum of Bulgarian ASTEC Users), which aims at the validation of the ASTEC code in the field of severe accidents. Future activities will target the ASTEC capability (as a PSA level 2 tool) to simulate a large range of reactor accident scenarios with intervention of safety systems (either passive systems or systems operated by operators). The final target is to assess Severe Accident Management (SAM) procedures for VVER-1000 reactors. The ASTEC1.1v0 code version used here is the one released in June 2004 by the French IRSN (Institut de Radioprotection et de Surete Nucleaire) and the German GRS (Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH). (author)

  17. ASTEC validation on PANDA SETH

    International Nuclear Information System (INIS)

    Bentaib, A.; Bleyer, A.

    2011-01-01

    The development of the ASTEC code (jointly by IRSN and GRS, i.e. the Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH) is aimed at providing an integral code for the simulation of the whole course of severe accidents in Light-Water Reactors. ASTEC is a complex system of codes for reactor safety assessment; in this benchmark, only the CPA (Containment Part of ASTEC) module is used. CPA is a lumped-parameter module able to represent a multi-compartment containment. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases; they exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. In this paper, three tests selected from the PANDA SETH benchmark (tests 9, 9bis and 25) are considered to investigate the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Coarse and fine meshes were developed, representing the test facility with the two vessels DW1 and DW2 and the interconnection pipe. The numerical results are analyzed and compared to the experiments, and the comparison shows good agreement between experiments and calculations. (author)
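    The zone/junction representation described above can be caricatured by two gas volumes exchanging mass through one junction; a toy sketch assuming isothermal ideal gas and a linear junction conductance (hypothetical values, far simpler than the actual CPA junction models):

```python
R_SPEC = 287.0  # J/(kg K), dry air
T = 300.0       # K, isothermal ASSUMPTION

def equalise(m1, m2, v1, v2, conductance, dt, steps):
    """March the junction mass flow w = conductance*(p1 - p2) between two
    well-mixed zones, with zone pressures from the ideal gas law."""
    for _ in range(steps):
        p1 = m1 * R_SPEC * T / v1
        p2 = m2 * R_SPEC * T / v2
        w = conductance * (p1 - p2)  # kg/s, zone 1 -> zone 2
        m1 -= w * dt
        m2 += w * dt
    return m1, m2

# Two 100 m^3 zones with unequal gas masses drift toward pressure balance.
m1, m2 = equalise(2.0, 1.0, 100.0, 100.0, 1.0e-6, 0.1, 20000)
print(m1, m2)
```

    Mass is conserved by construction and the pressure difference decays exponentially; a lumped-parameter containment code solves the same kind of coupled zone balances, with multi-component gases, energy equations and nonlinear junction laws added.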

  18. SARNET, a success story. Survey of major achievements on severe accidents and of knowledge capitalization within the ASTEC code

    International Nuclear Information System (INIS)

    Albiol, T.; Van Dorsselaere, J.P.; Reinke, N.

    2013-01-01

    51 organizations from Europe and Canada cooperated within SARNET (Severe Accident Research Network of Excellence), joining their research capacities in order to resolve the most important pending Severe Accident (SA) issues for enhancing the safety of existing and future Nuclear Power Plants (NPPs). SARNET defines common research programmes and develops common computer codes and methodologies for safety assessment. The ASTEC integral code, jointly developed by IRSN (France) and GRS (Germany) for Light Water Reactor (LWR) source term SA evaluation, Probabilistic Safety Assessment (PSA) level 2 studies and SA management evaluation, is the main integrating component of SARNET. The scientific knowledge generated in the Corium, Source Term and Containment topics has been integrated into the code through improved or new physical models, and ASTEC now constitutes the reference European SA integral code. During the four and a half years of SARNET, 30 partners have assessed the successive versions of the ASTEC V1 code through validation, and more than 60 scientists have been trained in the use of the code. Validation tasks on about 65 experiments were performed to cover all physical phenomena occurring in a severe accident: circuit thermal-hydraulics, core degradation, fission product (FP) release and transport, Molten-Corium-Concrete-Interaction (MCCI), and, in the containment, thermal-hydraulics as well as aerosol, iodine and hydrogen behaviour. The overall status of validation can be considered good, with results often close to those of mechanistic codes. Some reach the limits of present knowledge, for instance on MCCI, and, as in most codes, an adequate model for reflooding of a degraded core is still missing. IRSN and GRS are currently preparing the new series of ASTEC V2 versions that will account for most of the needs of evolution expressed by the SARNET partners. The first version V2.0, planned for March 09, will be applicable to EPR and will include the ICARE2

  19. Enhancement of ASTEC and COCOSYS regarding fission product release during MCCI

    Energy Technology Data Exchange (ETDEWEB)

    Agethen, Kathrin [Bochum Univ. (Germany). Reactor Simulation and Safety Group

    2016-10-15

    The focus in this paper is on the enhancement of the fission product release model during molten core concrete interaction in the severe accident analysis codes ASTEC and COCOSYS. After both codes are harmonised and the model interaction as well as the input parameters are adapted, extended model approaches are implemented. These lead to an improvement of the release rates for selected semi-volatile species validated against the ACE tests under ex-vessel conditions.

  20. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the confirmation of the modelling capabilities through calculations of appropriate experiments by external users other than the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which can change the results of the initially conducted validation. Because a considerable number of developers are involved in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results remain valid for succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but with two different code versions. For every test case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed characteristic for the test case under consideration; for post-calculations of experiments, a comparison to experimental data is also carried out. Three validation cases from the regression testing procedure are presented in this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality of the thermal-hydraulics modelling in ASTEC CPA. Aerosol behaviour is validated against the BMC VANAM M3 experiment, and the results also show very good agreement with the experimental data. Finally, iodine behaviour is checked in the validation test case of the THAI IOD-11 experiment.
Within this test-case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing
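    The version-to-version comparison described above boils down to checking characteristic parameters against a tolerance; a minimal sketch (function names, data and the 2% tolerance are illustrative, not the actual GRS tooling):

```python
def compare_runs(ref_results, new_results, rel_tol=0.02):
    """Flag parameters whose peak value drifted between two code versions.
    ref_results/new_results map a parameter name to its time series."""
    regressions = []
    for name, ref_series in ref_results.items():
        peak_ref = max(ref_series)
        peak_new = max(new_results[name])
        if abs(peak_new - peak_ref) > rel_tol * abs(peak_ref):
            regressions.append(name)
    return regressions

# Toy data: the containment pressure peak drifted by ~3%, hydrogen by <1%.
ref = {"containment_pressure": [1.0, 2.1, 3.4], "h2_mass": [0.0, 5.0, 12.0]}
new = {"containment_pressure": [1.0, 2.1, 3.5], "h2_mass": [0.0, 5.0, 12.1]}
print(compare_runs(ref, new))  # -> ['containment_pressure']
```

    The same input deck run under both versions keeps the comparison meaningful: any flagged drift must then come from a code change rather than from modelling choices in the deck.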

  1. ASTEC code development, validation and applications for severe accident management within the CESAM European project - 15392

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Chatelard, P.; Chevalier-Jabet, K.; Nowack, H.; Herranz, L.E.; Pascal, G.; Sanchez-Espinoza, V.H.

    2015-01-01

    ASTEC, jointly developed by IRSN and GRS, is considered the European reference code since it capitalizes the knowledge from European research in the domain. The CESAM project aims at its enhancement and extension for use in severe accident management (SAM) analysis of the Generation II-III nuclear power plants (NPP) presently under operation or foreseen in the near future in Europe, spent fuel pools included. Within the CESAM project three main types of research activities are performed: further validation of the ASTEC models important for SAM, in particular for the phenomena of importance in the Fukushima-Daiichi accidents, such as reflooding of degraded cores, pool scrubbing, hydrogen combustion, or spent fuel pool behaviour; modelling improvements, especially for BWRs or based on the feedback of the validation tasks; and ASTEC applications to severe accident scenarios in European NPPs in order to assess prevention and mitigation measures. An important step will be reached with the next major ASTEC V2.1 version, planned to be delivered in the first part of 2015. Its main improvements will concern the possibility to simulate in detail the core degradation of BWRs and PHWRs, and a model of reflooding of severely degraded cores. A new user-friendly graphical user interface will be available for plant analyses

  2. Radioactive releases of nuclear power plants: the code ASTEC

    International Nuclear Information System (INIS)

    Sdouz, G.; Pachole, M.

    1999-11-01

    In order to adopt potential countermeasures to protect the population during the course of an accident in a nuclear power plant, a fast prediction of the radiation exposure is necessary. The basic input value for such a dispersion calculation is the source term, i.e. the description of the physical and chemical behavior of the released radioactive nuclides. Based on a source term database, a pilot system has been developed to determine a relevant source term and to generate the input file for the dispersion code TAMOS of the Zentralanstalt fuer Meteorologie und Geodynamik (ZAMG). This file can be sent directly as an e-mail attachment to the TAMOS user for further processing. The source terms for 56 European nuclear power plant units are included in the pilot version of the code ASTEC (Austrian Source Term Estimation Code). The use of the system is demonstrated with an example based on an accident in the unit TEMELIN-1. In order to calculate typical core inventories for the database, the international computer code ORIGEN 2.1 was installed and applied. The report is completed with a discussion of optimal data transfer. (author)

  3. Analysis of ASTEC-Na capabilities for simulating a loss of flow CABRI experiment

    International Nuclear Information System (INIS)

    Flores y Flores, A.; Matuzas, V.; Perez-Martin, S.; Bandini, G.; Ederli, S.; Ammirabile, L.; Pfrang, W.

    2016-01-01

    Highlights: • ASTEC-Na results for the CABRI BI1 test have been compared with experimental data. • The ASTEC-Na calculations reach the boiling onset within the error bar of the test. • The coolant axial profile in ASTEC-Na fits the experimental data almost perfectly. • All the calculations agree well with the downward two-phase front propagation. • Agreement is worse for the upward two-phase front propagation. - Abstract: This paper presents simulation results for the CABRI BI1 test obtained with the ASTEC-Na code, currently under development, together with a comparison of the results with the available experimental data. The EU JASMIN project (7th FP of EURATOM) centres on the development and validation of the new severe accident analysis code ASTEC-Na (Accident Source Term Evaluation Code) for sodium-cooled fast reactors, owned and developed by IRSN. A series of experiments performed in the past (CABRI/SCARABEE experiments) and new experiments to be conducted in the new experimental sodium facility KASOLA have been chosen to validate the ASTEC-Na code. One of the in-pile experiments considered for the validation of the ASTEC-Na thermal-hydraulic models is the CABRI BI1 test, a pure loss-of-flow transient using a low-burnup MOX fuel pin. The flow coast-down resulted in channel voiding leading to clad melting; only limited fuel melting took place. Results from the analysis of this test with the SIMMER and SAS-SFR codes are also presented in this work to check their suitability for further code benchmarking purposes.

  4. Evaluation of Thermal Load to the Lower Head Vessel Using the ASTEC Computer Code

    International Nuclear Information System (INIS)

    Park, Raejoon; Ahn, Kwangil

    2013-01-01

    The thermal load from the corium to the lower head vessel of the APR1400 (Advanced Power Reactor 1400) during a small-break loss-of-coolant accident (SBLOCA) without safety injection (SI) has been evaluated using the ASTEC (Accident Source Term Evaluation Code) computer code, developed as part of the EU (European Union) SARNET (Severe Accident Research NETwork) programme. The ASTEC results predict that, with external reactor vessel cooling (ERVC), the reactor vessel does not fail in a two-layer corium configuration of the SBLOCA in the APR1400, in spite of substantial melting of the reactor vessel wall. These preliminary results also indicate that the outer-surface temperature and heat transfer coefficient have little effect on the vessel geometry change. A more detailed analysis of the main parameter effects on the corium behaviour in the lower plenum is necessary to evaluate IVR-ERVC in the APR1400, in particular for a three-layer corium configuration in the total loss of feedwater (TLFW) scenario. Comparisons of the present results with others are necessary to verify them and to apply them to the actual IVR-ERVC evaluation in the APR1400
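    As a back-of-the-envelope illustration of why the corium layer configuration matters for IVR-ERVC, the sketch below compares the "focused" sideward heat flux from a metallic layer of varying thickness against an assumed external critical heat flux; every number is a hypothetical placeholder, not an APR1400 value.

    ```python
    import math

    # In-vessel retention sketch: IVR-ERVC succeeds if the local heat flux
    # through the vessel wall stays below the critical heat flux (CHF) of
    # the external boiling.  A thin metal layer on top of the oxide pool
    # "focuses" sideward heat: the thinner the layer, the higher the flux.
    # All numbers are assumed for illustration.
    Q_TO_METAL = 1.2e6   # W, heat entering the metal layer from below (assumed)
    R_VESSEL   = 2.3     # m, lower-head radius at the layer elevation (assumed)
    CHF        = 1.5e6   # W/m^2, external-cooling critical heat flux (assumed)

    def sideward_flux(layer_height_m: float) -> float:
        """Wall heat flux over the vessel band wetted by the metal layer,
        assuming conservatively that all incoming heat leaves sideways."""
        side_area = 2.0 * math.pi * R_VESSEL * layer_height_m
        return Q_TO_METAL / side_area

    for h in (0.05, 0.20):
        q = sideward_flux(h)
        print(h, q, "OK" if q < CHF else "CHF exceeded")
    ```

    With these placeholder numbers, the 5 cm layer exceeds the assumed CHF while the 20 cm layer does not, which is the focusing-effect argument behind the two-layer versus three-layer discussion above.
    
    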

  5. ASTEC-CATHARE2 benchmarks on French PWR 1300MWe reactors

    International Nuclear Information System (INIS)

    Tregoures, Nicolas; Philippot, Marc; Foucher, Laurent; Guillard, Gaetan; Fleurot, Joelle

    2009-01-01

    The French Institut de Radioprotection et de Surete Nucleaire (IRSN) is performing a level 2 Probabilistic Safety Assessment (PSA-2) on the French 1300 MWe reactors. This PSA-2 relies heavily on the ASTEC integral computer code, jointly developed by IRSN and GRS (Germany). In order to assess the reliability and the quality of the physical results of the ASTEC V1.3 code, as well as the PWR 1300 MWe reference input deck, an extensive series of benchmarks against the French best-estimate thermal-hydraulic code CATHARE 2 V2.5 has been performed on 14 different severe accident scenarios. The present paper details 2 of the 14 studied scenarios: a 12-inch cold-leg Loss of Coolant Accident (LOCA) and a 2-tube Steam Generator Tube Rupture (SGTR). The thermal-hydraulic behavior of the primary and secondary circuits is thoroughly investigated, and the ASTEC results of the core degradation phase are presented. Overall, the thermal-hydraulic behavior given by ASTEC V1.3 is in very good agreement with the CATHARE 2 V2.5 results. (author)

  6. Main modelling features of the ASTEC V2.1 major version

    International Nuclear Information System (INIS)

    Chatelard, P.; Belon, S.; Bosland, L.; Carénini, L.; Coindreau, O.; Cousin, F.; Marchetto, C.; Nowack, H.; Piar, L.; Chailan, L.

    2016-01-01

    Highlights: • Recent modelling improvements of the ASTEC European severe accident code are outlined. • Key new physical models now available in the ASTEC V2.1 major version are described. • ASTEC progress towards a multi-design reactor code is illustrated for BWR and PHWR. • ASTEC's strong link with the on-going EC CESAM FP7 project is emphasized. • The main remaining modelling issues (on which IRSN efforts are now directed) are given. - Abstract: A new major version of the European severe accident integral code ASTEC, developed by IRSN with some GRS support, was delivered in November 2015 to the worldwide ASTEC community. The main modelling features of this V2.1 version are summarised in this paper. In particular, the in-vessel coupling technique between the reactor coolant system thermal-hydraulics module and the core degradation module has been strongly re-engineered to remove some well-known weaknesses of the former V2.0 series. The V2.1 version also includes new core degradation models specifically addressing the BWR and PHWR reactor types, as well as several other physical modelling improvements, notably on reflooding of severely damaged cores, Zircaloy oxidation under air atmosphere, corium coolability during corium–concrete interaction, and source term evaluation. Moreover, this V2.1 version constitutes the backbone of the CESAM FP7 project, whose final objective is to further improve ASTEC for use in Severe Accident Management analysis of the Gen.II–III nuclear power plants presently under operation or foreseen in the near future in Europe. As part of this European project, IRSN's efforts to continuously improve both code numerical robustness and computing performance at plant scale, as well as users' tools, are being intensified. Besides, ASTEC will continue capitalising the whole body of knowledge on severe accident phenomenology by progressively keeping its physical models at the state of the art through regular feedback from the interpretation of the current and

  7. Overview of the independent ASTEC V2.0 validation by SARNET partners

    International Nuclear Information System (INIS)

    Chatelard, Patrick; Arndt, Siegfried; Atanasova, Boryana; Bandini, Giacomino; Bleyer, Alexandre; Brähler, Thimo; Buck, Michael; Kljenak, Ivo; Kujal, Bohumir

    2014-01-01

    Significant efforts are put into the assessment of the severe accident integral code ASTEC, jointly developed for several years by IRSN and GRS, either through comparison with the results of the most important international experiments or through benchmarks against other severe accident simulation codes on plant applications. These efforts are made first and foremost by the code developers' organisations, IRSN and GRS, but also by numerous partners, in particular within the SARNET European network. The first version of the new ASTEC V2 series was released to SARNET partners in July 2009. Two subsequent V2.0 code revisions, including several modelling improvements, were then released to the same partners in 2010 and 2011. This paper first summarises the approach of ASTEC validation against experiments, along with a description of the validation matrix, and then presents a few examples of applications of the ASTEC V2.0-rev1 version carried out in 2011 by SARNET users. These calculation examples are selected to cover diverse aspects of severe accident phenomenology, i.e. both in-vessel and ex-vessel processes, in order to give a good picture of the current ASTEC V2 capabilities. Finally, the main lessons drawn from this joint validation task are summarised, along with an evaluation of the relevance of the current physical modelling and thus an identification of the ASTEC V2.0 validity domain

  8. Applications of ASTEC integral code on a generic CANDU 6

    Energy Technology Data Exchange (ETDEWEB)

    Radu, Gabriela, E-mail: gabriela.radu@nuclear.ro [Institute for Nuclear Research, Campului 1, 115400 Mioveni, Arges (Romania); Prisecaru, Ilie [Power Engineering Department, University “Politehnica” of Bucharest, 313 Splaiul Independentei, Bucharest (Romania)

    2015-05-15

    Highlights: • Short overview of the models included in the ASTEC MCCI module. • MEDICIS/CPA coupled calculations for a generic CANDU6 reactor. • Two cases taking into account different pool/concrete interface models. - Abstract: In case of a hypothetical severe accident in a nuclear power plant, the corium, consisting of the molten reactor core and internal structures, may flow onto the concrete floor of the containment building. This would cause molten corium–concrete interaction (MCCI), in which the heat transfer from the hot melt to the concrete causes the decomposition and the ablation of the concrete. The potential hazard of this interaction is the loss of integrity of the containment building and the release of fission products into the environment, owing to the possibility of a concrete-foundation melt-through, or of containment over-pressurization by the gases produced from the decomposition of the concrete or by the combustion of combustible gases. In the safety assessment of nuclear power plants, it is necessary to know the consequences of such a phenomenon. The paper presents an example of application of the ASTEC V2 code to a generic CANDU6 reactor, concerning the thermal-hydraulic behaviour of the containment during molten core–concrete interaction in the reactor vault. The calculations were carried out with the MEDICIS MCCI module and the CPA containment module of the ASTEC code, coupled through a specific prediction–correction method; the coupling describes the heat exchanges with the vault walls and with partially absorbent gases, and the heat conduction inside the vault walls is also modelled. Two cases are presented in this paper, taking into account two different heat transfer models at the pool/concrete interface, for a siliceous concrete. The corium pool configuration corresponds to a homogeneous configuration with a detailed description of the upper crust.
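    The basic MCCI energy balance behind modules such as MEDICIS can be illustrated with a quasi-steady one-dimensional ablation estimate: the heat flux from the pool is consumed by heating the concrete to its decomposition temperature and by the decomposition enthalpy. The property values below are rough assumed figures for a siliceous concrete, not the ones used in the calculations above.

    ```python
    # Minimal one-dimensional concrete ablation sketch for MCCI:
    #   v = q / (rho * (cp * (T_dec - T_0) + h_dec))
    # where q is the pool-to-concrete heat flux.  Property values are
    # rough, assumed figures for a siliceous concrete.
    RHO   = 2300.0    # kg/m^3, concrete density (assumed)
    CP    = 1000.0    # J/(kg K), concrete specific heat (assumed)
    T0    = 300.0     # K, initial concrete temperature
    T_DEC = 1500.0    # K, effective decomposition temperature (assumed)
    H_DEC = 2.0e6     # J/kg, decomposition enthalpy (assumed)

    def ablation_velocity(q_wm2: float) -> float:
        """Quasi-steady ablation front speed (m/s) for heat flux q (W/m^2)."""
        return q_wm2 / (RHO * (CP * (T_DEC - T0) + H_DEC))

    # With ~200 kW/m^2 from the pool, the front advances on the order of
    # ten centimetres per hour, typical of the early phase of MCCI.
    v = ablation_velocity(2.0e5)
    print(v * 3600.0 * 100.0)  # cm of basemat ablated per hour
    ```

    The linearity in q is the point of the sketch: halving the heat flux halves the ablation rate, which is why the choice of pool/concrete interface heat transfer model in the two cases above matters so much.
    
    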

  9. Validation of ASTEC core degradation and containment models

    International Nuclear Information System (INIS)

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

    Ruhr-Universitaet Bochum performed, in a German-funded project, a validation of the in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany). In this paper selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code's capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable for higher temperatures, while the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other, and the quality of a simulation therefore depends on the characteristics of each model. Notably, the CPA FRONT model, which requires the simplest input parameters, provides the best agreement with the experimental data
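    The cladding oxidation correlations named above all share a parabolic rate form, w² = K(T)·t with an Arrhenius rate constant K(T), which is why their validity is temperature-range dependent. The sketch below shows that form with placeholder coefficients; the published Cathcart–Pawel and Urbanic–Heidrick constants are deliberately not reproduced here.

    ```python
    import math

    # Parabolic Zircaloy oxidation kinetics of the kind used by
    # Cathcart-Pawel / Urbanic-Heidrick type correlations: the squared
    # oxygen mass gain grows linearly in time, w^2 = K(T) * t, with an
    # Arrhenius rate constant K(T) = A * exp(-B / T).  The coefficients
    # below are illustrative placeholders, NOT the published values of
    # either correlation.
    A_ILL = 3.0e2    # (kg/m^2)^2 / s, assumed pre-exponential factor
    B_ILL = 1.9e4    # K, assumed activation temperature

    def rate_constant(T_kelvin: float) -> float:
        """Arrhenius parabolic rate constant K(T)."""
        return A_ILL * math.exp(-B_ILL / T_kelvin)

    def mass_gain(T_kelvin: float, t_seconds: float) -> float:
        """Oxygen mass gain per unit clad area after t seconds at constant T."""
        return math.sqrt(rate_constant(T_kelvin) * t_seconds)

    # Parabolic growth: doubling the time scales the gain by sqrt(2),
    # and a hotter clad oxidises far faster (hence the hydrogen source).
    print(mass_gain(2000.0, 100.0) / mass_gain(1500.0, 100.0))
    ```

    The strong exponential sensitivity to temperature is also why a correlation fitted at moderate temperatures cannot simply be extrapolated to the high-temperature range, the issue the BEST-FIT model addresses.
    
    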

  10. Simulation of experiment on aerosol behaviour at severe accident conditions in the LACE experimental facility with the ASTEC CPA code

    International Nuclear Information System (INIS)

    Kljenak, I.; Mavko, B.

    2007-01-01

    The experiment LACE LA4 on thermal-hydraulics and aerosol behavior in a nuclear power plant containment, performed in the LACE experimental facility, was simulated with the CPA module of the severe accident computer code ASTEC V1.2. The specific purpose of the work was to assess the capability of the module (and of the code) to simulate thermal-hydraulic conditions and aerosol behavior in the containment of a light-water-reactor nuclear power plant under severe accident conditions. The test was simulated with the boundary conditions described in the experiment report. Results for the thermal-hydraulic conditions in the test vessel, as well as the dry aerosol concentrations in the test vessel atmosphere, are compared to the experimental results and analyzed. (author)
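    A first-order flavour of the aerosol depletion being assessed here is gravitational settling: a Stokes terminal velocity over an effective mixing height turns a well-mixed volume into an exponential decay of the airborne concentration. The sketch below uses assumed, illustrative values; the slip correction and other depletion mechanisms (diffusiophoresis, wall condensation) modelled in CPA are neglected.

    ```python
    import math

    # Aerosol depletion sketch: a particle settling at its Stokes terminal
    # velocity v_s = rho_p * d^2 * g / (18 * mu) depletes a well-mixed
    # volume of height H as C(t) = C0 * exp(-(v_s / H) * t).
    # Values are illustrative, not LACE LA4 data.
    RHO_P = 3000.0     # kg/m^3, particle material density (assumed)
    MU    = 1.8e-5     # Pa s, air dynamic viscosity
    G     = 9.81       # m/s^2
    H     = 5.0        # m, effective settling height (assumed)

    def settling_velocity(d_m: float) -> float:
        """Stokes terminal velocity (slip correction neglected)."""
        return RHO_P * d_m ** 2 * G / (18.0 * MU)

    def airborne_fraction(d_m: float, t_s: float) -> float:
        """Fraction of the initial aerosol still airborne after t seconds."""
        return math.exp(-settling_velocity(d_m) / H * t_s)

    # Micron-sized particles linger for hours; 10 um particles settle fast.
    print(airborne_fraction(1.0e-6, 3600.0), airborne_fraction(1.0e-5, 3600.0))
    ```

    The d² dependence is the key qualitative point: agglomeration and hygroscopic growth shift particles to larger sizes and thereby accelerate depletion, which is the coupling a containment aerosol module must capture.
    
    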

  11. On-going activities in the European JASMIN project for the development and validation of ASTEC-Na SFR safety simulation code - 15072

    International Nuclear Information System (INIS)

    Girault, N.; Cloarec, L.; Herranz, L.; Bandini, G.; Perez-Martin, S.; Ammirabile, L.

    2015-01-01

    The 4-year JASMIN collaborative project (Joint Advanced Severe accidents Modelling and Integration for Na-cooled fast reactors) started in December 2011 in the frame of the 7th Framework Programme of the European Commission. It aims at developing a new European simulation code, ASTEC-Na, dealing with the primary phase of SFR core disruptive accidents. The development of a new code, based on a robust advanced simulation tool and able to encompass the in-vessel and in-containment phenomena occurring during a severe accident, is indeed of utmost interest for advanced and innovative future SFRs, for which an enhanced safety level will be required. This code, based on the ASTEC European code system developed by IRSN and GRS for severe accidents in water-cooled reactors, is progressively integrating and capitalizing on the state-of-the-art knowledge of SFR accidents through the improvement of physical models or the development of new ones. The new models are assessed against in-pile (CABRI, SCARABEE, etc.) and out-of-pile experiments conducted during the 1970s-80s, and code-to-code benchmarking against current accident simulation tools for SFRs is also conducted. During the first two and a half years of the project, model specifications and developments were carried out and the validation test matrix was built. The first version of ASTEC-Na, available in early 2014, already includes a thermal-hydraulics module able to simulate single- and two-phase sodium flow conditions, a zero-point neutronic model with a simple definition of channel and axial dependences of reactivity feedbacks, and models derived from the IRSN SCANAIR code for simulating fuel-pin thermo-mechanical behaviour and fission gas release/retention. Meanwhile, models have been developed in the source term area for in-containment particle generation and particle chemical transformation, but their implementation remains to be done. As a first validation step, the ASTEC-Na calculations were satisfactorily compared to thermal-hydraulics experimental
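    A "zero-point neutronic model with reactivity feedbacks" of the kind described above can be sketched as one-delayed-group point kinetics with a linear power feedback. Every coefficient below is assumed for illustration and is not taken from ASTEC-Na.

    ```python
    # One-delayed-group point kinetics with a linear negative power
    # feedback -- a minimal sketch of a zero-point neutronic model with
    # reactivity feedback.  All coefficients are assumed, illustrative
    # values, not ASTEC-Na data.
    BETA  = 0.0035     # delayed neutron fraction
    GEN   = 2.0e-5     # s, prompt neutron generation time (assumed)
    LAM_D = 0.08       # 1/s, effective precursor decay constant
    ALPHA = -1.0e-3    # reactivity per unit relative power rise (assumed)

    def step(P, C, rho_ext, dt):
        """Advance relative power P and precursor concentration C by dt
        (explicit Euler; dt must resolve the prompt time scale)."""
        rho = rho_ext + ALPHA * (P - 1.0)          # net reactivity with feedback
        dP = ((rho - BETA) / GEN) * P + LAM_D * C  # prompt term + delayed source
        dC = (BETA / GEN) * P - LAM_D * C          # precursor balance
        return P + dP * dt, C + dC * dt

    # Start at steady state at nominal power, then apply a small external
    # reactivity step of 0.1 $ and integrate for 20 s.
    P, C = 1.0, BETA / (GEN * LAM_D)
    for _ in range(200_000):                       # 20 s with dt = 0.1 ms
        P, C = step(P, C, rho_ext=0.1 * BETA, dt=1.0e-4)
    print(P)  # the negative feedback arrests the power rise
    ```

    After the prompt jump, the power creeps on the delayed-neutron time scale towards the equilibrium set by the feedback coefficient; channel- and axially-dependent weighting of such feedbacks is what the ASTEC-Na model adds on top of this zero-dimensional picture.
    
    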

  12. Review of current severe accident management approaches in Europe and identification of related modelling requirements for the computer code ASTEC V2.1

    Energy Technology Data Exchange (ETDEWEB)

    Hermsmeyer, S. [European Commission JRC, Petten (Netherlands). Inst. for Energy and Transport; Herranz, L.E.; Iglesias, R. [CIEMAT, Madrid (Spain); and others

    2015-07-15

    The severe accident at the Fukushima-Daiichi nuclear power plant (NPP) has led to a worldwide review of nuclear safety approaches and is bringing a refocussing of R and D in the field. To support these efforts, several new Euratom FP7 projects have been launched. The CESAM project focuses on the improvement of the ASTEC computer code. ASTEC is jointly developed by IRSN and GRS and is considered the European reference code for severe accident analyses, since it capitalizes knowledge from the extensive European R and D in the field. The project aims at the code's enhancement and extension for use in Severe Accident Management (SAM) analysis of the Generation II-III NPPs presently under operation or foreseen in the near future in Europe, spent fuel pools included. The work reported here concerns the SAM strategies to be simulated, which are important for the further development of the code. To this end, the SAM strategies applied in the EU have been compiled. This compilation is mainly based on the public information made available in the frame of the EU "stress tests" for NPPs and has been complemented by information provided by the different CESAM partners. The context of SAM is explained and the strategies are presented. The modelling capabilities for the simulation of these strategies in the current production version 2.0 of ASTEC are discussed. Furthermore, the requirements for the next version, ASTEC V2.1, which is supported in the CESAM project, are highlighted. They are a necessary complement to the list of code improvements drawn from consolidating new fields of application, such as SFP and BWR model enhancements, and from new experimental results on severe accident phenomena.

  14. Reactor cooling systems thermal-hydraulic assessment of the ASTEC V1.3 code in support of the French IRSN PSA-2 on the 1300 MWe PWRs

    International Nuclear Information System (INIS)

    Tregoures, Nicolas; Philippot, Marc; Foucher, Laurent; Guillard, Gaetan; Fleurot, Joelle

    2010-01-01

    The French Institut de Radioprotection et de Surete Nucleaire (IRSN) is performing a level 2 Probabilistic Safety Assessment (PSA-2) on the French 1300 MWe PWRs. This PSA-2 study relies on the ASTEC integral computer code, jointly developed by IRSN and GRS (Germany). In order to assess the reliability and the quality of the physical results of the ASTEC V1.3 code, as well as the PWR 1300 MWe reference input deck, a wide-ranging series of comparisons with the French best-estimate thermal-hydraulic code CATHARE 2 V2.5 has been performed on 14 different severe-accident scenarios. The present paper details 4 of the 14 studied scenarios: a 12-in. cold-leg Loss of Coolant Accident (LOCA), a 2-tube Steam Generator Tube Rupture (SGTR), a 12-in. Steam Line Break (SLB) and a total Loss of Feed Water (LFW) scenario. The thermal-hydraulic behavior of the primary and secondary circuits is thoroughly investigated and compared to the CATHARE 2 V2.5 results. The ASTEC results of the core degradation phase are also presented. Overall, the thermal-hydraulic behavior given by ASTEC V1.3 is in very good agreement with the CATHARE 2 V2.5 results.

  15. Analysis of the COLIMA CA-U3 test using the ELSA module of ASTEC

    International Nuclear Information System (INIS)

    Godin-Jacqmin, L.; Journeau, C.; Piluso, P.

    2006-01-01

    The main purpose of this study is to calculate the COLIMA CA-U3 experimental test with the ELSA module of the ASTEC code. This test was performed to represent the fission product and structural material releases from a VVER-440 magma-type configuration. Additional work is therefore also done on the analysis of the test results and on the corresponding use of ASTEC parameters, in order to model the test configuration as closely as possible. Code results on fission product releases are compared qualitatively to the experimental results for all elements that can be evaluated by the ASTEC code. Sensitivity cases are also performed on the gas flow rate carrying the fission products. (author)

  16. ASTEC participation in the international standard problem on KAEVER

    International Nuclear Information System (INIS)

    Spitz, P.; Van Dorsselaere, J.P.; Schwinges, B.; Schwarz, S.

    2001-01-01

    The objective of International Standard Problem No. 44 (ISP44) was aerosol depletion behaviour under severe accident conditions in an LWR containment, examined in the KAEVER test facility of Battelle (Germany). Nine organisations participated in ISP44 with 5 different codes, including a joint participation of GRS and IPSN with the integral code ASTEC (in particular the CPA module) that they have developed together. Five tests were selected from the KAEVER test matrix: K123, K148, K186 and K188 as open standard problems, and the three-component test K187 as a blind standard problem. All these tests were performed in supersaturated conditions and with slight fog formation, which are the most demanding conditions for the coupled problem of thermal hydraulics and aerosol processes. The comparison between calculation and test showed a good agreement for all the tests with respect to the thermal-hydraulic conditions in the vessel, i.e. total pressure, atmosphere temperature, sump water and nitrogen mass, etc. As for aerosol depletion, the ASTEC results were in good overall agreement with the measured data. The code in particular predicted well the fast depletion of the hygroscopic and mixed aerosols and the slow depletion of the insoluble silver aerosol. The important effects of bulk condensation, solubility and the Kelvin effect on aerosol depletion were well predicted. However, the code's overestimation of steam condensation on hygroscopic aerosols in supersaturated conditions indicates that some slight improvements of the corresponding ASTEC models are needed in the future. In the final ISP44 workshop, the deviations of the ASTEC results from the experiments were considered small compared to those of most other codes. (authors)
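    The Kelvin effect mentioned above (curvature-enhanced vapour pressure over small droplets and particles) can be sketched with the Kelvin equation; the property values below are approximate figures for water near containment temperatures, assumed for illustration.

    ```python
    import math

    # Kelvin effect sketch: the equilibrium vapour pressure over a curved
    # droplet surface exceeds that over a flat pool, so very small
    # droplets need a supersaturated atmosphere to grow:
    #   S_eq(d) = exp(4 * sigma * M / (rho * R * T * d))
    # Property values are approximate figures for water near 100 C.
    SIGMA = 0.059     # N/m, surface tension of water
    M     = 0.018     # kg/mol, molar mass of water
    RHO_W = 958.0     # kg/m^3, liquid water density
    R_GAS = 8.314     # J/(mol K), universal gas constant

    def kelvin_ratio(d_m: float, T: float = 373.0) -> float:
        """Equilibrium saturation ratio over a droplet of diameter d (m)."""
        return math.exp(4.0 * SIGMA * M / (RHO_W * R_GAS * T * d_m))

    # A 10 nm particle needs a noticeably supersaturated atmosphere to
    # activate, while a 1 um droplet is nearly insensitive to curvature.
    print(kelvin_ratio(1.0e-8), kelvin_ratio(1.0e-6))
    ```

    This curvature barrier is why, in supersaturated KAEVER conditions, the onset of steam condensation on small hygroscopic particles is delicate to model: solubility lowers the equilibrium ratio while the Kelvin term raises it.
    
    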

  17. Assessment of the integral code ASTEC with respect to the late in-vessel phase of core degradation

    International Nuclear Information System (INIS)

    D'Alessandro, Christophe; Starflinger, Joerg

    2014-01-01

    The integral code ASTEC is being developed jointly by GRS and IRSN as the European reference code for severe accidents. In the EU project CESAM it is foreseen to assess the capabilities of ASTEC for a broad range of reactor designs (PWR, BWR, VVER, CANDU, Gen III+, etc.) and especially to model and capture the effect of severe accident mitigation measures. This requires a physically sound and sufficiently accurate modelling of the processes and phenomena that govern the course of the accident, and the modelling has to be validated to a sufficient extent. Concentrating on the in-vessel aspects of severe accidents, the present paper addresses these requirements by presenting results of ASTEC calculations for relevant experiments that cover the major physical phenomena during core degradation (melting and relocation of the fuel, oxidation, molten corium pool formation, and its coolability in the lower plenum once it has slumped from the core region). The assessment of models for bundle degradation is based on CORA (13 and W2); CORA represented a bundle of non-irradiated, electrically heated UO2 rods. Melt progression in strongly degraded geometry is addressed in the PHEBUS-FPT4 experiment, carried out with irradiated fuel in a debris bed configuration. The validation of molten pool modelling is based on the BALI and RASPLAV-Salt experiments. The BALI facility consists of a full-scale slice of the lower plenum (allowing experiments at prototypical Rayleigh numbers) and utilizes uniformly heated water as a simulant for corium. The RASPLAV experiments use a scaled-down slice of the lower head; the use of a non-eutectic molten salt as simulant addresses the effect of a significant solidification range typical of real corium. Calculation results of ASTEC are discussed in comparison with experimental measurements. Further, questions concerning the extrapolation of findings from validation to reactor application are critically discussed, concerning e.g. the choice of model parameters.

  18. Comparisons for ESTA-Task3: ASTEC, CESAM and CLÉS

    Science.gov (United States)

    Christensen-Dalsgaard, J.

    The ESTA activity under the CoRoT project aims at testing the tools for computing stellar models and oscillation frequencies that will be used in the analysis of asteroseismic data from CoRoT and other large-scale upcoming asteroseismic projects. Here I report results of comparisons between calculations using the Aarhus code (ASTEC) and two other codes, for models that include diffusion and settling. It is found that there are likely deficiencies, requiring further study, in the ASTEC computation of models including convective cores.

  19. Validation of CESAR Thermal-hydraulic Module of ASTEC V1.2 Code on BETHSY Experiments

    Science.gov (United States)

    Tregoures, Nicolas; Bandini, Giacomino; Foucher, Laurent; Fleurot, Joëlle; Meloni, Paride

    The ASTEC V1 system code is being jointly developed by the French Institut de Radioprotection et Sûreté Nucléaire (IRSN) and the German Gesellschaft für Anlagen und ReaktorSicherheit (GRS) to address severe accident sequences in a nuclear power plant. Thermal-hydraulics in primary and secondary system is addressed by the CESAR module. The aim of this paper is to present the validation of the CESAR module, from the ASTEC V1.2 version, on the basis of well instrumented and qualified integral experiments carried out in the BETHSY facility (CEA, France), which simulates a French 900 MWe PWR reactor. Three tests have been thoroughly investigated with CESAR: the loss of coolant 9.1b test (OECD ISP N° 27), the loss of feedwater 5.2e test, and the multiple steam generator tube rupture 4.3b test. In the present paper, the results of the code for the three analyzed tests are presented in comparison with the experimental data. The thermal-hydraulic behavior of the BETHSY facility during the transient phase is well reproduced by CESAR: the occurrence of major events and the time evolution of main thermal-hydraulic parameters of both primary and secondary circuits are well predicted.

  20. Development and assessment of ASTEC code for severe accident simulation

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Pignet, S.; Seropian, C.; Montanelli, T.; Giordano, P.; Jacq, F.; Schwinges, B.

    2005-01-01

    The ASTEC integral code, jointly developed for several years by IRSN and GRS for the evaluation of the source term during a severe accident (SA) in a Light Water Reactor, will play a central role in the SARNET network of excellence of the 6th Framework Programme (FwP) of the European Commission, which started in spring 2004. It should become the reference European SA integral code in the coming years. The version V1.1, released in June 2004, models most of the main physical phenomena (except steam explosion) near or at the state of the art. In order to allow the study of a great number of scenarios, a compromise must be found between precision of results and calculation time: one day of accident time usually takes less than one day of real time to simulate on a PC. Important efforts are being made on validation, covering more than 30 reference experiments, often International Standard Problems from the OECD (CORA, LOFT, PACTEL, BETA, VANAM, ACE-RTF, Phebus.FPT1...). The code is also used for the detailed interpretation of all the integral Phebus.FP experiments. Eighteen European partners performed a first independent evaluation of the code capabilities in 2000-03 within the frame of the EVITA 5th FwP project, on the one hand by comparison to experiments and on the other hand by benchmarking against the MAAP4 and MELCOR integral codes on PWR and VVER plant applications. Their main conclusions were the need to improve code robustness (especially the two new modules CESAR and DIVA, simulating respectively circuit thermal hydraulics and core degradation) and post-processing tools. Some improvements have already been achieved in the latest version V1.1 on these two aspects. A new module, MEDICIS, devoted to Molten Core Concrete Interaction (MCCI), is implemented in this version, with a tight coupling to the containment thermal-hydraulics module CPA. The paper presents a detailed analysis of a TMLB sequence on a French 900 MWe PWR, from

  1. Modelling of QUENCH-03 and QUENCH-06 Experiments Using RELAP/SCDAPSIM and ASTEC Codes

    Directory of Open Access Journals (Sweden)

    Tadas Kaliatka

    2014-01-01

    To prevent total meltdown of an uncovered and overheated core, reflooding with water is a necessary accident management measure. Because these actions lead to the generation of hydrogen, which can cause further problems, the related phenomena are investigated through experiments and computer simulations. In this paper, the loss-of-coolant experiments QUENCH-03 and QUENCH-06, performed at Forschungszentrum Karlsruhe, are modelled using the RELAP5/SCDAPSIM and ASTEC codes. The performed benchmark allowed analysing different modelling features. Recommendations for model development are presented.

  2. ASTEC and MELCOR comparison for a VVER-1000 60 mm small break LOCA

    International Nuclear Information System (INIS)

    Georgieva, J.; Stefanova, A.; Groudev, P.; Tusheva, P.; Mladenov, I.; Dimov, D.; Passalacqua, R.

    2005-01-01

    In this paper, a comparison between severe accident calculations performed for a VVER-1000 with the ASTEC 1.1v0 and MELCOR 1.8.5 computer codes is presented, for a small-break LOCA (ID 60 mm) without intervention of the hydro-accumulators. This investigation has been performed in the framework of the SARNET project under the EURATOM 6th Framework Programme. Once the accident sequence scenario is specified, both codes (MELCOR and ASTEC) are able to determine the core and containment damage states and to estimate the release of radionuclides from the fuel as well as from the primary circuit and containment. These results are used to estimate the maximum period of time during which the personnel could still take particular decisions in order to mitigate such an accident. The aim of the performed analysis is to estimate the discrepancies between the ASTEC and MELCOR 1.8.5 calculations. Such discrepancies will be studied and, where appropriate, proposals for ASTEC improvements will be made. The ASTEC capability to simulate specific reactor accident scenarios and/or particular safety systems will also be tested. The final target is to propose severe accident management procedures for VVER-1000 reactors. In conclusion, the analysis of the small-break LOCA (ID 60 mm, without hydro-accumulators) has shown some discrepancies between ASTEC and MELCOR, especially during the degradation of the core. Further analyses are planned in which the MELCOR temperature 'set point' for core degradation (2520 K) will be progressively increased to approach the ASTEC one (estimated to be about 3200 K). The comparison of the new results will allow a better evaluation of the in-vessel models implemented in ASTEC

  3. Development and validation of the ASTEC-Na thermal-hydraulic models

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, L. E.; Perez, S.; Bandini, G.; Jacq, F.; Parisi, C.; Berna, C.

    2014-07-01

    In recent years, interest in sodium-cooled fast reactors (SFR) has been fostered worldwide by the search for higher nuclear energy sustainability. This has been reflected in various international initiatives such as the GEN-IV International Forum, INPRO or the ESNII platforms. At the same time, innovative nuclear reactor designs, particularly SFRs, are aiming at even higher safety standards than current LWRs; proof of this is the consideration of severe accidents from the earliest stages of reactor design. The commonalities between LWR and SFR severe accident scenarios suggest that some of the knowledge achieved in the LWR arena might be applicable to some extent to SFRs. This is the spirit underlying the EU JASMIN project, whose generic goal is to develop the ASTEC-Na code from the LWR ASTEC platform. This entails extending and adapting some existing models as well as implementing new ones in all the areas covered, from neutronics and pin thermo-mechanics to the in-containment source term behavior, through the indispensable Na thermal-hydraulics. (Author)

  4. Specific validation of COCOSYS and ASTEC and generic application. Final report; Gezielte Validierung von COCOSYS und ASTEC sowie generische Anwendungsrechnungen mit diesen Rechenprogrammen. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Klein-Hessling, W.; Arndt, S.; Erdmann, W.; and others

    2010-07-15

    In connection with the provision of tools for the assessment of incident and accident sequences and of accident management measures in nuclear power plants, the Federal Ministry of Economics and Technology (BMWi) sponsored in this project a further validation of the COCOSYS (Containment Code System) code system and the Franco-German ASTEC (Accident Source Term Evaluation Code) integral code. COCOSYS is being developed and validated for the comprehensive simulation of severe accidents in a light-water reactor (LWR) containment as well as for the analytical monitoring of experiments. The general objective is the simulation of all relevant processes and conditions in the containment during the course of a severe accident (including design basis accidents). This also includes the consideration of all relevant interactions between the various phenomena. ASTEC is being jointly developed by IRSN and GRS with the aim to provide a fast-running code for the calculation of the entire sequence of a severe accident in a light-water reactor, starting from the initiating event and including the release of fission products into the environment. The code's fields of application are level-2 probabilistic safety analyses, the analysis of incident and accident sequences, uncertainty and sensitivity analyses as well as the analytical evaluation of experiments. The work performed within the COCOSYS project involves the validation of the new iodine module AIM-3 as well as the corresponding monitoring of iodine experiments inside the THAI facility. The main focus of these experiments was the interaction of iodine with steel and paint, radiolytic interactions and the iodine-ozone reaction. The extensions of the fire simulation capabilities of COCOSYS regarding plume simulation and soot transport have been examined successfully on the basis of further experiments within the OECD PRISME project. A further main focus is the combined use and comparison of calculated results of COCOSYS, a lumped parameter code, and CFX

  6. Fission product release from nuclear fuel II. Validation of ASTEC/ELSA on analytical and large scale experiments

    International Nuclear Information System (INIS)

    Brillant, G.; Marchetto, C.; Plumecocq, W.

    2013-01-01

    Highlights: • A wide range of experiments is presented for the ASTEC/ELSA code validation. • Analytical tests such as AECL, ORNL and VERCORS are considered. • A large-scale experiment, PHEBUS FPT1, is considered. • The good agreement with measurements shows the efficiency of the ASTEC modelling. • Improvements concern the FP release modelling from MOX and high burn-up UO2 fuels. - Abstract: This article is the second of two articles dedicated to the mechanisms of fission product release from a degraded core. The models of fission product release from nuclear fuel in the ASTEC code have been described in detail in the first part of this work (Brillant et al., this issue). In this contribution, the validation of ELSA, the module of ASTEC that deals with fission product and structural material release from a degraded core, is presented. A large range of experimental tests, with various temperatures and conditions for the fuel surrounding atmosphere (oxidising and reducing), is thus simulated with the ASTEC code. The validation database includes several analytical experiments with both bare fuel (e.g. MCE1 experiments) and cladded fuel (e.g. HCE3, VERCORS). Furthermore, the PHEBUS large-scale experiments are used for the validation of ASTEC. The rather satisfactory comparison between ELSA calculations and experimental measurements demonstrates the efficiency of the analytical models to describe fission product release in severe accident conditions

  7. Study on severe accidents and countermeasures for WWER-1000 reactors using the integral code ASTEC

    International Nuclear Information System (INIS)

    Tusheva, P.; Schaefer, F.; Altstadt, E.; Kliem, S.; Reinke, N.

    2011-01-01

    The research field focussing on the investigation and analysis of severe accidents is an important part of nuclear safety. To maintain the safety barriers as long as possible and to retain the radioactivity within the airtight premises or the containment, to avoid or mitigate the consequences of such events and to assess the risk, thorough studies are needed. On the one hand, it is the aim of severe accident research to understand the complex phenomena during the in- and ex-vessel phases, involving reactor physics, thermal-hydraulics, and physicochemical and mechanical processes. On the other hand, the investigations strive for effective severe accident management measures. This paper is focused on the possibilities for accident management measures in case of severe accidents. The reactor pressure vessel is the last barrier that keeps the molten materials inside the reactor and thus prevents higher loads on the containment. To assess the behaviour of a nuclear power plant during transient or accident conditions, computer codes are widely used, which have to be validated against experiments or benchmarked against other codes. The analyses performed with the integral code ASTEC cover two accident sequences which could lead to a severe accident: a small break loss of coolant accident and a station blackout. The results have shown that in case of unavailability of the major active safety systems the reactor pressure vessel would ultimately fail. The discussed issues concern the main phenomena during the early and late in-vessel phases of the accident, the time to core heat-up, the hydrogen production, the mass of corium in the reactor pressure vessel lower plenum and the failure of the reactor pressure vessel. Additionally, possible operator actions and countermeasures in the preventive or mitigative domain are addressed. The presented investigations contribute to the validation of the European integral severe accident code ASTEC for the WWER-1000 type of reactor

  8. Application of ASTEC, MELCOR, and MAAP Computer Codes for Thermal Hydraulic Analysis of a PWR Containment Equipped with the PCFV and PAR Systems

    Directory of Open Access Journals (Sweden)

    Siniša Šadek

    2017-01-01

    The integrity of the containment will be challenged during a severe accident due to pressurization caused by the accumulation of steam and other gases and the possible ignition of hydrogen and carbon monoxide. Installation of a passive filtered venting system and passive autocatalytic recombiners allows control of the pressure, radioactive releases, and concentration of flammable gases. Thermal hydraulic analysis of the containment equipped with dedicated passive safety systems after a hypothetical station blackout event is performed for a two-loop pressurized water reactor NPP with three integral severe accident codes: ASTEC, MELCOR, and MAAP. MELCOR and MAAP are the two major US codes for severe accident analyses, and ASTEC is the European code, joint property of the Institut de Radioprotection et de Sûreté Nucléaire (IRSN), France, and the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany. The codes' overall characteristics, physics models, and the analysis results are compared herein. Despite considerable differences between the codes' modelling features, the general trends of the NPP behaviour are found to be similar, although discrepancies related to the simulation of the processes in the containment cavity are also observed and discussed in the paper.
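
The flammable-gas concern addressed by the PAR and filtered venting systems above can be illustrated with a very rough screening of a H2/air/steam mixture. The thresholds below are commonly quoted round numbers (about 4 vol% H2 for flammability in air, steam inerting above roughly 55 vol% steam); they are illustrative only and are not the criteria implemented in ASTEC, MELCOR, or MAAP.

```python
# Rough flammability screening of a H2/air/steam containment atmosphere.
# H2_LFL and STEAM_INERTING are illustrative round-number thresholds,
# not code models or plant-specific criteria.
H2_LFL = 0.04          # lower flammability limit of H2 in air (vol fraction)
STEAM_INERTING = 0.55  # approximate steam inerting threshold (vol fraction)

def is_flammable(x_h2, x_steam):
    """Return True if the mixture lies inside the (rough) flammable region."""
    if x_steam >= STEAM_INERTING:
        return False  # high steam fraction inerts the mixture
    return x_h2 >= H2_LFL

print(is_flammable(0.08, 0.30))  # True: 8% H2, moderate steam
print(is_flammable(0.08, 0.60))  # False: steam-inerted
print(is_flammable(0.02, 0.10))  # False: below the lower flammability limit
```

A real assessment would use a ternary (Shapiro-type) flammability diagram rather than two independent thresholds; this sketch only shows the screening logic.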

  9. Analyses with ASTEC related to release of FPs and aerosol transport in case of SBLOCA For WWER 1000

    International Nuclear Information System (INIS)

    Atanasova, B.; Stefanova, A.; Groudev, P.

    2008-01-01

    The objective of this paper is to present the results obtained from calculations with the ASTEC computer code for the source term evaluation of a specific severe accident transient. The calculations have been performed with the new version of ASTEC. The ASTEC 1.3 R2 code version was released by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) at the end of 2007. The sequences include the release of fission products into the reactor containment and environment and the transport of fission products. The analyses proposed here are performed to simulate the radioactive product release through the cold leg of the SG under accidental conditions. This investigation has been performed in the framework of the SARNET project (under the EURATOM 6th framework program) by the FoBAUs group (Forum of Bulgarian ASTEC users). (authors)

  10. Validation of ASTEC v1.0 computer code against FPT2 test

    International Nuclear Information System (INIS)

    Mladenov, I.; Tusheva, P.; Kalchev, B.; Dimov, D.; Ivanov, I.

    2005-01-01

    The aim of this work is to investigate the sensitivity of the ASTEC v1.0 computer code by means of various nodalization schemes of the model and to validate the code against the PHEBUS FPT2 experiment. This code is used for severe accident analysis. The aim corresponds to the main technical objective of the experiment, which is to contribute to the validation of models and computer codes to be used for the calculation of the source term in case of a severe accident in a Light Water Reactor. The scope of the FPT2 objectives is large - separately for the bundle, the experimental circuit and the containment. Additional objectives are to characterize aerosol sizing and deposition processes, and also potential FP poisoning effects on hydrogen recombiner coupons exposed to containment atmospheric conditions representative of an LWR severe accident. The analyses of the results of the performed calculations show good agreement with the reference case calculations, and hence with the experimental data. Some differences in the calculations for the thermal behaviour appear locally during the oxidation phase and the heat-up phase. There is very good agreement regarding the volatile and semi-volatile fission product release from the fuel pellets. Important for the analysis of the process is the final axial distribution of the relocated fuel mass obtained at the end of the calculation
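
The aerosol deposition processes mentioned above are partly governed by gravitational settling, which in the Stokes regime follows the textbook relation v = rho_p * d^2 * g / (18 * mu). The particle properties below are illustrative, not FPT2 data.

```python
# Gravitational settling velocity of a spherical aerosol particle in the
# Stokes regime: v = rho_p * d**2 * g / (18 * mu). This textbook relation
# underlies containment aerosol deposition modelling; the particle size and
# density below are illustrative placeholders, not FPT2 measurements.
G = 9.81          # m/s^2
MU_AIR = 1.8e-5   # Pa*s, dynamic viscosity of air near room temperature

def stokes_settling_velocity(diameter_m, density_kg_m3):
    return density_kg_m3 * diameter_m**2 * G / (18.0 * MU_AIR)

# A 1 micrometre particle with a CsI-like density (~4500 kg/m^3):
v = stokes_settling_velocity(1.0e-6, 4500.0)
print(f"{v:.2e} m/s")  # on the order of 1e-4 m/s
```

The quadratic dependence on diameter is why aerosol sizing matters: doubling the particle diameter quadruples the settling velocity.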

  11. Simulation of VVER MCCI reactor test case with ASTEC V2/MEDICIS computer code

    International Nuclear Information System (INIS)

    Stefanova, A.; Grudev, P.; Gencheva, R.

    2011-01-01

    This paper presents an application of ASTEC v2, module MEDICIS, for the simulation of a VVER molten core-concrete interaction (MCCI) test case without water injection. The main purpose of the performed calculation is the verification and improvement of the MEDICIS module of ASTEC v2 for a better simulation of core-concrete interaction processes. The VVER-1000 reference nuclear power plant was chosen as the SARNET2 MCCI benchmark test case. The initial conditions for the MCCI test are taken from an SBO scenario calculated with ASTEC version 1.3R2 by INRNE. (authors)

  12. Stand-Alone Containment Analysis of the Phébus FPT Tests with the ASTEC and the MELCOR Codes: The FPT-0 Test

    Directory of Open Access Journals (Sweden)

    Bruno Gonfiotti

    2017-01-01

    The integral Phébus tests were probably one of the most important experimental campaigns performed to investigate the progression of severe accidents in light water reactors. In these tests, the degradation of a PWR fuel bundle was investigated employing different control rod materials and burn-up levels in strongly or weakly oxidizing conditions. From the results of such tests, numerical codes such as ASTEC and MELCOR have been developed to describe the evolution of a severe accident. After the termination of the experimental Phébus campaign, these two codes were further extended. Therefore, the aim of the present work is to reanalyze the first Phébus test (FPT-0) employing the updated ASTEC and MELCOR versions to ensure that the new improvements introduced in such codes also allow a better prediction of these Phébus tests. The analysis focuses on the stand-alone containment aspects of this test, and the paper summarizes the main thermal-hydraulic results and presents different sensitivity analyses carried out on the aerosols and fission products behavior. This paper is part of a series of publications covering the four executed Phébus tests employing a solid PWR fuel bundle: FPT-0, FPT-1, FPT-2, and FPT-3.

  13. Comparative accident analyses for a KONVOI-type PWR using the integral codes ASTEC V1.33 and MELCOR 1.8.6

    International Nuclear Information System (INIS)

    Reinke, Nils; Erdmann, Walter; Nowack, Holger; Sonnenkalb, Martin

    2010-08-01

    In the frame of the project RS1180 funded by the German Federal Ministry for Economics and Technology (BMWi), calculations have been carried out with the integral code ASTEC V1.33 p3 developed by GRS for two postulated accidents in a nuclear power plant with a KONVOI-type pressurized water reactor and compared to calculations with MELCOR 1.8.6 YU. The major objective was to assess the capability of ASTEC for application in level 2 probabilistic safety analyses (PSA). In particular, it was investigated to which extent ASTEC is able to perform such integral calculations meeting criteria with regard to both reasonable calculation time and the specific boundary conditions necessary for PSA analyses. Two exemplary accidents were selected: - a transient with loss of steam generator feed water, - a small break loss of coolant accident (50 cm²) in the cold leg of the coolant line connected to the pressurizer. In principle, the results demonstrate the capability of ASTEC V1.33 to carry out such PSA level 2 calculations. In addition, it has to be noted that for both ASTEC and MELCOR the requirements regarding the quality of the results lead to prolonged calculation times due to more detailed nodalisations of the whole plant. This is valid for the core region as well as for the primary circuit and for the containment. Consequently, calculation times on the order of one day to two weeks result, thereby excluding extensive parameter analyses to assess the sensitivity of the calculation results. Concerning the quality of the results, good agreement can be stated between the ASTEC and MELCOR results in terms of global data. In detail, some results are sensitive to user effects. Here, the nodalisation seems to be of major influence, besides differences in the modelling of specific phenomena. The comparison suggests that in particular the influence of the nodalisation, defined by the user and depending on the user's experience, should be carefully evaluated. Since some

  14. Comparative analysis of a LOCA for a German PWR with ASTEC and ATHLET-CD

    International Nuclear Information System (INIS)

    Reinke, N.; Chan, H.W.; Sonnenkalb, M.

    2013-01-01

    This paper presents the results of a comparative analysis performed with ASTEC V2.02 and a coupled ATHLET-CD V2.2c/COCOSYS V2.4 calculation for a German 1300 MWe KONVOI-type PWR. The purpose of this analysis is mainly to assess the behaviour of the ASTEC code in modelling both the thermal-hydraulic phenomena in the coolant circuit arising during a hypothetical severe accident and the early phase of core degradation, versus the more mechanistic code system ATHLET-CD/COCOSYS. The performed analyses cover a loss of coolant accident (LOCA) sequence. Such a comparison has been done for the first time. The integral code ASTEC (Accident Source Term Evaluation Code), commonly developed since 1996 by IRSN and GRS, is a fast-running programme which allows the calculation of entire sequences of severe accidents (SA) in light water reactors from the initiating event up to the release of fission products into the environment, thereby covering all important in-vessel and containment phenomena. The thermal-hydraulic mechanistic system code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by GRS for the analysis of the whole spectrum of leaks and transients in PWRs and BWRs. For modelling core degradation processes, the CD part (Core Degradation) of ATHLET can be activated. For analyses of the containment behaviour, ATHLET-CD has been coupled to the GRS code COCOSYS (COntainment COde SYStem). (orig.)

  15. Fission-product release modelling in the ASTEC integral code: the status of the ELSA module

    International Nuclear Information System (INIS)

    Plumecocq, W.; Kissane, M.P.; Manenc, H.; Giordano, P.

    2003-01-01

    Safety assessment of water-cooled nuclear reactors encompasses potential severe accidents where, in particular, the release of fission products (FPs) and actinides into the reactor coolant system (RCS) is evaluated. The ELSA module is used in the ASTEC integral code to model all releases into the RCS. A wide variety of experiments is used for validation: small-scale CRL, ORNL and VERCORS tests; large-scale Phebus-FP tests; etc. Being a tool that covers both intact fuel and degraded states, ELSA is being improved by maximizing the use of information from degradation modelling. Short-term improvements will include some treatment of the initial FP release due to intergranular inventories and the implementation of models for the release of additional structural materials (Sn, Fe, etc.). (author)
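
Many FP-release correlations of the kind validated against such tests share the shape of a first-order release with an Arrhenius rate constant (e.g. CORSOR-type models). The sketch below illustrates only this generic shape; the coefficients A and Q are placeholders, not ELSA parameters.

```python
# First-order fractional fission-product release with an Arrhenius rate
# constant, k(T) = A * exp(-Q / (R * T)). This is the generic shape behind
# many FP-release correlations; A and Q below are illustrative placeholders,
# not coefficients from the ELSA module.
import math

R = 8.314  # J/(mol*K), universal gas constant

def release_fraction(T_kelvin, time_s, A=1.0, Q=3.0e5):
    """Fraction of the inventory released after time_s at constant T."""
    k = A * math.exp(-Q / (R * T_kelvin))  # release rate constant (1/s)
    return 1.0 - math.exp(-k * time_s)

# The released fraction grows steeply with temperature:
for T in (1500.0, 2000.0, 2500.0):
    print(T, release_fraction(T, time_s=600.0))
```

The steep temperature dependence is why accurate degradation temperatures matter so much for source term estimates.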

  16. Application of ASTEC V2.0 to severe accident analyses for German KONVOI type reactors

    International Nuclear Information System (INIS)

    Nowack, H.; Erdmann, W.; Reinke, N.

    2011-01-01

    The integral code ASTEC is jointly developed by IRSN (Institut de Radioprotection et de Surete Nucleaire, France) and GRS (Germany). Its main objective is to simulate severe accident scenarios in PWRs from the initiating event up to the release of radioactive material into the environment. This paper describes the ASTEC modeling approach and the nodalisation of a KONVOI type PWR as an application example. Results from an integral severe accident study are presented and shortcomings as well as advantages are outlined. As a conclusion, the applicability of ASTEC V2.0 for deterministic severe accident analyses used for PSA level 2 and Severe Accident Management studies will be assessed. (author)

  17. Simulation of the containment spray system test PACOS PX2.2 with the integral code ASTEC and the containment code system COCOSYS

    International Nuclear Information System (INIS)

    Risken, Tobias; Koch, Marco K.

    2011-01-01

    Reactor safety research includes the analysis of postulated accidents in nuclear power plants (NPPs). These accidents may involve a loss of coolant from the plant's reactor coolant system, during which heat and pressure within the containment increase. To handle these atmospheric conditions, containment spray systems are installed in various light water reactors (LWRs) worldwide as part of the accident management system. For the improvement and assurance of safety in NPP operation and accident management, numerical simulations of postulated accident scenarios are performed. The presented calculations, performed at Ruhr-Universitaet Bochum, concern the predictability of the containment spray system's effect with the integral code ASTEC and the containment code system COCOSYS. For this purpose the test PACOS Px2.2 is simulated, in which water is sprayed into the stratified containment atmosphere of the BMC (Battelle Modell-Containment). (orig.)

  18. Analysis and evaluation of the ASTEC model basis. Relevant experiments. Technical report

    International Nuclear Information System (INIS)

    Koppers, V.; Koch, M.K.

    2015-12-01

    The present report is a Technical Report within the research project "ASMO", funded by the German Federal Ministry of Economics and Technology (BMWi 1501433) and carried out at the Reactor Simulation and Safety Group, Chair of Energy Systems and Energy Economics (LEE) at the Ruhr-Universitaet Bochum (RUB). The project deals with the analysis of the model basis of the Accident Source Term Evaluation Code (ASTEC). This report focuses on the containment part of ASTEC (CPA) and presents the simulation results of the experiment TH20.7. The experimental series TH20 was performed in the test vessel THAI (Thermal-hydraulics, Aerosols, Iodine) to investigate the erosion of a helium layer by a blower-generated air jet. Helium is used as a substitute for hydrogen. In the experiment TH20.7 a light-gas layer is established and eroded by a momentum-driven jet. The simulation of momentum-driven jets is challenging for CPA because there is no model to simulate the kinetic momentum transfer. The subject of this report is the analysis of the capability of the code, with the current model basis, to model momentum-driven phenomena. The jet is modelled using virtual ventilation systems, so-called FAN-Systems, which are adapted to the erosion velocity. The simulation results are compared to the experimental results and to a basic calculation using FAN-Systems without any adjustments. For further improvement, different variation calculations are performed. First, the vertical nodalization is refined. Subsequently, the resistance coefficients are adjusted to support the jet flow pattern and the number of FAN-Systems is reduced. The analysis shows that the simulation of a momentum-driven light-gas layer erosion is possible using adjusted FAN-Systems. A carefully selected vertical nodalization and the adaptation of the resistance coefficients improve the simulation results.
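
Whether a jet can erode such a stratified light-gas layer is commonly characterized by a densimetric Richardson number comparing buoyancy to jet momentum. The sketch below is a generic estimate with illustrative gas densities and velocities; it is not the CPA/FAN-System model nor TH20.7 data.

```python
# Densimetric Richardson number for a vertical jet impinging on a light-gas
# layer: Ri = g * (drho / rho_ambient) * L / u**2, where u is the jet
# velocity at the layer interface and L a layer length scale.
# Ri << 1: jet momentum dominates and erosion is fast; Ri >> 1: buoyancy
# suppresses erosion. All values below are illustrative, not TH20.7 data.
G = 9.81  # m/s^2

def richardson(rho_ambient, rho_layer, layer_thickness, jet_velocity):
    delta_rho = rho_ambient - rho_layer  # positive for a light (He-rich) layer
    return G * (delta_rho / rho_ambient) * layer_thickness / jet_velocity**2

# Air ambient (~1.2 kg/m^3) vs. a helium-rich layer (~0.6 kg/m^3), 1 m thick:
print(richardson(1.2, 0.6, 1.0, jet_velocity=5.0))   # moderate Ri
print(richardson(1.2, 0.6, 1.0, jet_velocity=20.0))  # small Ri, fast erosion
```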

  19. Analysis of the THAI Iod-11 and Iod-12 tests: Advancements and limitations of ASTEC V2.0R3p1 and MELCOR V2.1.4803

    International Nuclear Information System (INIS)

    Gonfiotti, Bruno; Paci, Sandro

    2015-01-01

    Highlights: • The I2 transport in a multi-compartment vessel was analysed. • The ASTEC and MELCOR codes were employed. • The same nodalisation was used for the code-to-code comparison. • The I2 concentrations were quite well simulated in ASTEC. • Numerical issues on MELCOR. - Abstract: This work is related to the application of the ASTEC V2.0R3p1 and MELCOR V2.1.4803 codes to the analysis of the THAI Iod-11 and Iod-12 containment tests, characterised by an iodine release. The main scope of these two tests was to investigate the iodine-steel interaction on dry and wet surfaces, an interaction supposed to be a two-step process: an initial, faster and reversible physisorption followed by a slower, irreversible chemisorption of the physisorbed I2. The aim of the present work is to highlight advancements and limitations of the current ASTEC and MELCOR code versions with respect to the older code versions employed during the European SARNET projects. The investigation was carried out as a code-to-code comparison vs. the experimental THAI data, focusing on the evaluation of the code models treating the iodine behaviour. A similar spatial nodalisation was employed for both codes. As the main result, ASTEC showed overall good agreement with the iodine-related experimental data while, on the contrary, MELCOR showed poor results, probably due to unsolved numerical issues and unsatisfactory iodine modelling
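
The two-step uptake described above (fast reversible physisorption followed by slow irreversible chemisorption) can be sketched as a pair of first-order rate equations integrated with explicit Euler steps. The rate constants below are purely illustrative, not values fitted to the THAI tests or used by either code.

```python
# Two-step I2 uptake on a steel surface (illustrative sketch):
#   gas-phase I2 <-> physisorbed I2 -> chemisorbed I2
# k_ads, k_des, k_chem are hypothetical rate constants (1/s), not THAI
# values; concentrations are normalised to the initial gas-phase inventory.
def iodine_uptake(c_gas0=1.0, k_ads=1e-2, k_des=5e-3, k_chem=1e-3,
                  dt=1.0, t_end=3600.0):
    c_gas, c_phys, c_chem = c_gas0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        ads = k_ads * c_gas * dt      # reversible physisorption
        des = k_des * c_phys * dt
        chem = k_chem * c_phys * dt   # irreversible chemisorption
        c_gas += des - ads
        c_phys += ads - des - chem
        c_chem += chem
    return c_gas, c_phys, c_chem

g, p, c = iodine_uptake()
print(g, p, c)  # total iodine is conserved: g + p + c ≈ c_gas0
```

Because chemisorption is irreversible, the chemisorbed fraction grows monotonically and eventually drains the gas phase, which is the qualitative behaviour the Iod-11/Iod-12 tests probe.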

  20. Implementation and testing of the CFDS-FLOW3D code

    International Nuclear Information System (INIS)

    Smith, B.L.

    1994-03-01

    FLOW3D is a multi-purpose, transient fluid dynamics and heat transfer code developed by Computational Fluid Dynamics Services (CFDS), a branch of AEA Technology, based at Harwell. The code is supplied with a SUN-based operating environment consisting of an interactive grid generator SOPHIA and a post-processor JASPER for graphical display of results. Both SOPHIA and JASPER are extensions of the support software originally written for the ASTEC code, also promoted by CFDS. The latest release of FLOW3D contains well-tested turbulence and combustion models and, in a less-developed form, a multi-phase modelling potential. This document describes briefly the modelling capabilities of FLOW3D (Release 3.2) and outlines implementation procedures for the VAX, CRAY and CONVEX computer systems. Additional remarks are made concerning the in-house support programs which have been specially written in order to adapt existing ASTEC input data for use with FLOW3D; these programs operate within a VAX-VMS environment. Three sample calculations have been performed and results compared with those obtained previously using the ASTEC code, and checked against other available data, where appropriate. (author) 35 figs., 3 tabs., 42 refs

  1. Stand-alone containment analysis of Phébus FPT tests with ASTEC and MELCOR codes: the FPT-2 test.

    Science.gov (United States)

    Gonfiotti, Bruno; Paci, Sandro

    2018-03-01

    During the last 40 years, many studies have been carried out to investigate the different phenomena occurring during a Severe Accident (SA) in a Nuclear Power Plant (NPP). Such efforts have been supported by the execution of different experimental campaigns, and the integral Phébus FP tests were probably some of the most important experiments in this field. In these tests, the degradation of a Pressurized Water Reactor (PWR) fuel bundle was investigated employing different control rod materials and burn-up levels in strongly or weakly oxidizing conditions. From the findings of these and previous tests, numerical codes such as ASTEC and MELCOR have been developed to analyze the evolution of an SA in real NPPs. After the termination of the Phébus FP campaign, these two codes have been further improved to implement the more recent findings coming from different experimental campaigns. Therefore, continuous verification and validation is still necessary to check that the new improvements introduced in such codes also allow a better prediction of these Phébus tests. The aim of the present work is to re-analyze the Phébus FPT-2 test employing the updated ASTEC and MELCOR code versions. The analysis focuses on the stand-alone containment aspects of this test, and three different spatial nodalizations of the containment vessel (CV) have been developed. The paper summarizes the main thermal-hydraulic results and presents different sensitivity analyses carried out on the aerosols and fission products (FP) behavior. When possible, a comparison between the results obtained in this work and by different authors in previous work is also performed. This paper is part of a series of publications covering the four Phébus FP tests using a PWR fuel bundle: FPT-0, FPT-1, FPT-2, and FPT-3, excluding the FPT-4 one, related to the study of the release of low-volatility FP and transuranic elements from a debris bed and a pool of melted fuel.

  2. Stand-alone containment analysis of Phébus FPT tests with ASTEC and MELCOR codes: the FPT-2 test

    Directory of Open Access Journals (Sweden)

    Bruno Gonfiotti

    2018-03-01

    During the last 40 years, many studies have been carried out to investigate the different phenomena occurring during a Severe Accident (SA) in a Nuclear Power Plant (NPP). Such efforts have been supported by the execution of different experimental campaigns, and the integral Phébus FP tests were probably some of the most important experiments in this field. In these tests, the degradation of a Pressurized Water Reactor (PWR) fuel bundle was investigated employing different control rod materials and burn-up levels in strongly or weakly oxidizing conditions. From the findings of these and previous tests, numerical codes such as ASTEC and MELCOR have been developed to analyze the evolution of an SA in real NPPs. After the termination of the Phébus FP campaign, these two codes have been further improved to implement the more recent findings coming from different experimental campaigns. Therefore, continuous verification and validation is still necessary to check that the new improvements introduced in such codes also allow a better prediction of these Phébus tests. The aim of the present work is to re-analyze the Phébus FPT-2 test employing the updated ASTEC and MELCOR code versions. The analysis focuses on the stand-alone containment aspects of this test, and three different spatial nodalizations of the containment vessel (CV) have been developed. The paper summarizes the main thermal-hydraulic results and presents different sensitivity analyses carried out on the aerosols and fission products (FP) behavior. When possible, a comparison between the results obtained in this work and by different authors in previous work is also performed. This paper is part of a series of publications covering the four Phébus FP tests using a PWR fuel bundle: FPT-0, FPT-1, FPT-2, and FPT-3, excluding the FPT-4 one, related to the study of the release of low-volatility FP and transuranic elements from a debris bed and a pool of melted fuel.

  3. Assessment on 900–1300 MWe PWRs of the ASTEC-based simulation tool of SGTR thermal-hydraulics for the IRSN Emergency Technical Centre

    Energy Technology Data Exchange (ETDEWEB)

    Foucher, L., E-mail: laurent.foucher@irsn.fr [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SAG, Cadarache, Saint-Paul-lez-Durance 13115 (France); Cousin, F.; Fleurot, J. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SAG, Cadarache, Saint-Paul-lez-Durance 13115 (France); Brethes, S. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PRP-CRI/SESUC, Cadarache, Saint-Paul-lez-Durance 13115 (France)

    2014-06-01

In the event of an accident occurring in a nuclear power plant (NPP), being able to predict the amount of radioactive substances released into the environment is of prime importance. Depending on the severity of the accident, it can be necessary to quickly and efficiently protect the population and the surrounding environment from the associated radiological consequences. In France, the IRSN Emergency Technical Centre provides technical support for decision making in the event of a nuclear accident. The main objectives are to evaluate and predict the plant behaviour and the radioactive releases during the accident. Different types of complementary tools are used: expert assessments, pre-calculated databases, simulation tools, etc. In the case of Steam Generator Tube Rupture (SGTR) accidents, which may lead to significant radioactive releases to the atmosphere through the steam generator relief valves, IRSN is currently improving the simulation tools for diagnosis in crisis management. The objective is to adapt the thermal-hydraulic and FP behaviour modules of the severe accident integral code ASTEC V2.0, jointly developed by IRSN and its German counterpart GRS, to crisis management requirements. These requirements impose a fast running, highly reliable (accurate physical results), flexible and simple tool. This paper summarizes the results of the benchmarks between the ASTEC V2.0 thermal-hydraulic module and the CATHARE 2 (V2.5) French reference thermal-hydraulics code on several SGTR scenarios for both 900 and 1300 MWe PWRs, with particular emphasis on the computational time and the assessment of the physical models. The overall agreement between the two codes on the primary and secondary circuit thermal-hydraulic parameters is good. Moreover, the reliability and fast computational time of the thermal-hydraulic module of the ASTEC V2.0 code appeared very satisfactory and in accordance with the requirements of an emergency tool.

  4. Assessment on 900–1300 MWe PWRs of the ASTEC-based simulation tool of SGTR thermal-hydraulics for the IRSN Emergency Technical Centre

    International Nuclear Information System (INIS)

    Foucher, L.; Cousin, F.; Fleurot, J.; Brethes, S.

    2014-01-01

In the event of an accident occurring in a nuclear power plant (NPP), being able to predict the amount of radioactive substances released into the environment is of prime importance. Depending on the severity of the accident, it can be necessary to quickly and efficiently protect the population and the surrounding environment from the associated radiological consequences. In France, the IRSN Emergency Technical Centre provides technical support for decision making in the event of a nuclear accident. The main objectives are to evaluate and predict the plant behaviour and the radioactive releases during the accident. Different types of complementary tools are used: expert assessments, pre-calculated databases, simulation tools, etc. In the case of Steam Generator Tube Rupture (SGTR) accidents, which may lead to significant radioactive releases to the atmosphere through the steam generator relief valves, IRSN is currently improving the simulation tools for diagnosis in crisis management. The objective is to adapt the thermal-hydraulic and FP behaviour modules of the severe accident integral code ASTEC V2.0, jointly developed by IRSN and its German counterpart GRS, to crisis management requirements. These requirements impose a fast running, highly reliable (accurate physical results), flexible and simple tool. This paper summarizes the results of the benchmarks between the ASTEC V2.0 thermal-hydraulic module and the CATHARE 2 (V2.5) French reference thermal-hydraulics code on several SGTR scenarios for both 900 and 1300 MWe PWRs, with particular emphasis on the computational time and the assessment of the physical models. The overall agreement between the two codes on the primary and secondary circuit thermal-hydraulic parameters is good. Moreover, the reliability and fast computational time of the thermal-hydraulic module of the ASTEC V2.0 code appeared very satisfactory and in accordance with the requirements of an emergency tool.

  5. ASTEC: Controls analysis for personal computers

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  6. ASTEC V1.2.1 analysis of fission product transport in the primary system of a VVER-1000 type NPP during a severe accident

    International Nuclear Information System (INIS)

    Dienstbier, J.

    2006-06-01

The SOPHAEROS module of the ASTEC V1.2.1 code was used. The results are compared to those obtained with the MELCOR 1.8.5 code. One case was also calculated in which the input data for the SOPHAEROS module were taken from the MELCOR results instead of being provided by the other ASTEC modules. Marked differences were observed between the results of the two codes, which can only partially be explained by the different assumptions made in them. The deposition profiles along the primary piping, however, are similar in the two codes.

  7. Validation of ASTEC V2 models for the behaviour of corium in the vessel lower head

    International Nuclear Information System (INIS)

    Carénini, L.; Fleurot, J.; Fichot, F.

    2014-01-01

The paper is devoted to the presentation of validation cases carried out for the models describing the corium behaviour in the “lower plenum” of the reactor vessel implemented in the V2.0 version of the ASTEC integral code, jointly developed by IRSN (France) and GRS (Germany). In the ASTEC architecture, these models are grouped within the single ICARE module and they are all activated in typical accident scenarios. Therefore, it is important to check the validity of each individual model wherever experiments are available in which a single physical process is involved. The results of ASTEC applications to the following experiments are presented: FARO (corium jet fragmentation), LIVE (heat transfer between a molten pool and the vessel), MASCA (separation and stratification of immiscible corium phases) and OLHF (mechanical failure of the vessel). Compared to the previous ASTEC V1.3 version, the validation matrix has been extended. This work makes it possible to determine recommended values for some model parameters (e.g. the debris particle size in the fragmentation model and the criterion for debris bed liquefaction). Almost all the processes governing the corium behaviour, its thermal interaction with the vessel wall and the vessel failure are modelled in ASTEC, and these models have been assessed individually with satisfactory results. The main uncertainties appear to be related to the calculation of transient evolutions

  8. Simulation of KAEVER experiments on aerosol behavior in a nuclear power plant containment at accident conditions with the ASTEC code

    International Nuclear Information System (INIS)

    Kljenak, I.; Mavko, B.

    2006-01-01

Experiments on aerosol behaviour in saturated and non-saturated atmospheres, performed in the KAEVER experimental facility, were simulated with the severe accident computer code ASTEC CPA V1.2. The specific purpose of the work was to assess the capability of the code to model aerosol condensation and deposition in the containment of a light-water-reactor nuclear power plant at severe accident conditions, provided the atmosphere saturation conditions are simulated adequately. Five different tests were first simulated with boundary conditions obtained from the experiments. In all five cases a non-saturated atmosphere was obtained in the simulation, although in four of the tests the atmosphere was reported to be saturated. The simulations were then repeated with modified boundary conditions to obtain a saturated atmosphere in all tests. Results for dry and wet aerosol concentrations in the test vessel atmosphere for both sets of simulations are compared to experimental results. (author)

  9. Validation of ASTEC v2.0 corium jet fragmentation model using FARO experiments

    International Nuclear Information System (INIS)

    Hermsmeyer, S.; Pla, P.; Sangiorgi, M.

    2015-01-01

Highlights: • Model validation base extended to six FARO experiments. • Focus on the calculation of the fragmented particle diameter. • Capability and limits of the ASTEC fragmentation model. • Sensitivity analysis of model outputs. - Abstract: ASTEC is an integral code for the prediction of Severe Accidents in Nuclear Power Plants. As such, it needs to cover all physical processes that could occur during accident progression, yet keep its models simple enough for the ensemble to stay manageable and produce results within an acceptable time. The present paper is concerned with the validation of the corium jet fragmentation model of ASTEC v2.0 rev3 by means of a selection of six experiments carried out in the FARO facility. The different conditions applied in these six experiments help to analyse the model behaviour in different situations and to expose the model's limits. In addition to comparing model outputs with experimental measurements, sensitivity analyses are applied to investigate the model. The results of the paper are (i) validation runs, accompanied by an identification of situations where the implemented fragmentation model does not match the experiments well, and a discussion of the results; (ii) special attention to the models calculating the diameter of the fragmented particles, the identification of a fault in one of the implemented models, and a discussion of simplifications and ad hoc modifications to improve the model fit; and (iii) an investigation of the sensitivity of the predictions to inputs and parameters. In this way, the paper offers a thorough investigation of the merits and limitations of the fragmentation model used in ASTEC

  10. Comparative analysis of the results obtained by computer code ASTEC V2 and RELAP 5.3.2 for small leak ID 80 for VVER 1000

    International Nuclear Information System (INIS)

    Atanasova, B.; Grudev, P.

    2011-01-01

The purpose of this report is to present the results obtained by simulation and subsequent analysis of an accident sequence involving a small break of ID 80 mm for WWER 1000/B320 - Kozloduy NPP Units 5 and 6. Calculations were performed with the ASTEC v2 computer code for severe accident analysis, developed by the French and German organisations IRSN and GRS. The integral RELAP5 computer code is used as a reference for the comparison of results. The analyses focus on the processes occurring during the in-vessel phase of the accident, with significant core damage. The main thermohydraulic parameters, the start of reactor core degradation and the subsequent fuel relocation up to reactor vessel failure are evaluated in the analysis. The RELAP5 computer code is used as a reference to compare the results obtained up to early core degradation, which occurs after core uncovery once the fuel temperature exceeds 1200 °C

  11. ASTEC and MODEL: Controls software development at Goddard Space Flight Center

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.

    1993-01-01

The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at the Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. ASTEC, which has been under development for the last three years, is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN. An upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 90's and how it relates to ASTEC.

  12. A fast running method for predicting the efficiency of core melt spreading for application in ASTEC

    International Nuclear Information System (INIS)

    Spengler, C.

    2010-01-01

The integral Accident Source Term Evaluation Code (ASTEC) is jointly developed by the French Institut de Radioprotection et de Surete Nucleaire (IRSN) and the German Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH to simulate the complete scenario of a hypothetical severe accident in a nuclear light water reactor, from the initial event until the possible radiological release of fission products out of the containment. In the frame of the new series of ASTEC V2 versions, appropriate model extensions for the European Pressurised Water Reactor (EPR) are under development. With a view to assessing with ASTEC the proper operation of the ex-vessel melt retention and coolability concept of the EPR with regard to melt spreading, an approximation of the area finally covered by the corium and of the distance run by the corium front before freezing is required. A necessary capability of ASTEC is, in a first step, to identify boundary cases for which there is a potential that the melt will freeze before the spreading area is completely filled. This paper presents a fast running method for estimating the final extent of the area covered with melt, on which a simplified criterion in ASTEC for detecting such boundary cases will be based. If a boundary case is detected, the application of a more detailed method may be necessary to assess further the consequences for the accident sequence. The major objective here is to provide a reliable method for estimating the final result of the spreading, not to provide highly detailed methods to simulate the dynamics of the transient process. (orig.)
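The idea of a fast-running boundary-case check can be illustrated with a toy estimate. This is a deliberately simplified sketch, not the actual method of the paper: the front is assumed to advance as a disc at a constant speed until the melt solidifies, and the covered area is capped by the available floor area. All function names and parameters are illustrative.

```python
import math

# Toy fast-running estimate of the final melt spreading extent.
# NOT the ASTEC model: illustrative kinematics only.
def final_spread_area(volume_m3, front_speed_m_s, freeze_time_s,
                      min_thickness_m, available_area_m2):
    """Return (covered area in m^2, boundary-case flag)."""
    # Area reachable before freezing: the melt is treated as a disc whose
    # radius grows at front_speed until freeze_time.
    r_freeze = front_speed_m_s * freeze_time_s
    area_freeze = math.pi * r_freeze ** 2
    # Area needed to accommodate the volume at the minimum spreadable thickness.
    area_volume = volume_m3 / min_thickness_m
    covered = min(area_freeze, area_volume, available_area_m2)
    # Boundary case: the melt may freeze before the compartment is filled,
    # so a more detailed spreading calculation would be warranted.
    boundary_case = area_freeze < available_area_m2
    return covered, boundary_case
```

A hot, fluid melt (large freeze time) fills the compartment and the flag stays clear; a sluggish, rapidly freezing melt trips the boundary-case flag.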

  13. Air oxidation of Zircaloy-4 in the 600-1000 °C temperature range: Modeling for ASTEC code application

    Science.gov (United States)

    Coindreau, O.; Duriez, C.; Ederli, S.

    2010-10-01

Progress in the treatment of air oxidation of zirconium in severe accident (SA) codes is required for a reliable analysis of severe accidents involving air ingress. Air oxidation of zirconium can lead to accelerated core degradation and increased fission product release, especially of the highly radiotoxic ruthenium. This paper presents a model to simulate the air oxidation kinetics of Zircaloy-4 in the 600-1000 °C temperature range. It is based on available experimental data, including separate-effect experiments performed at IRSN and at Forschungszentrum Karlsruhe. The kinetic transition, named "breakaway", from a diffusion-controlled regime to accelerated oxidation is taken into account in the modeling via a critical mass gain parameter. The progressive propagation of the locally initiated breakaway is modeled by a linear increase of the oxidation rate with time. Finally, when breakaway propagation is complete, the oxidation rate stabilizes and the kinetics is modeled by a linear law. This new modeling is integrated in the severe accident code ASTEC, jointly developed by IRSN and GRS. Model predictions show good agreement with thermogravimetric data for different air flow rates and for slow temperature transient conditions.
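The three-regime kinetics described above (parabolic pre-breakaway, linearly accelerating propagation, then constant linear rate) can be sketched as follows. The rate constants, the critical mass gain and the propagation duration are illustrative placeholders, not the fitted ASTEC model parameters.

```python
import math

def integrate(t_end, dt=1.0, kp=0.5, k_lin=2.0, t_prop=100.0, m_crit=5.0):
    """Toy three-regime oxidation history (arbitrary units).

    Returns (mass gain at t_end, breakaway time or None).
    kp     -- parabolic rate constant, d(m^2)/dt = kp before breakaway
    k_lin  -- fully developed post-breakaway linear rate
    t_prop -- duration of breakaway propagation
    m_crit -- critical mass gain triggering breakaway
    """
    m, t, t_break = 0.0, 0.0, None
    while t < t_end:
        if t_break is None:
            # Diffusion-controlled (parabolic) regime: m^2 grows linearly in time.
            m = math.sqrt(m * m + kp * dt)
            if m >= m_crit:
                t_break = t + dt          # breakaway initiated
        else:
            since = t - t_break
            if since < t_prop:
                # Propagation: rate ramps linearly from the parabolic rate
                # at breakaway up to the fully developed linear rate.
                r0 = kp / (2.0 * m_crit)
                rate = r0 + (k_lin - r0) * since / t_prop
            else:
                rate = k_lin              # fully propagated: linear kinetics
            m += rate * dt
        t += dt
    return m, t_break
```

With these placeholder values the gain follows ~sqrt(kp·t) until breakaway at t ≈ m_crit²/kp, then accelerates toward the constant rate k_lin.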

  14. Assessment of ASTEC-CPA pool scrubbing models against POSEIDON-II and SGTR-ARTIST data

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Fontanet, Joan

    2009-01-01

Aerosol scrubbing in pools mitigates the potential source term in key severe accident scenarios in PWRs and BWRs. Even though the models were extensively validated in the past, a thorough and systematic validation under key challenging conditions is missing. Some of those conditions are high injection velocity, high pool temperature and/or the presence of submerged structures. In particular, in-code models have neither been updated nor validated against the most recent experimental data. The POSEIDON-II and SGTR-ARTIST projects produced sets of data under conditions of utmost interest for pool scrubbing validation: high temperature and submerged structures. This paper investigates the response of the models encapsulated in the CPA module of the ASTEC code in the simulation of those experimental set-ups. The influence of key pool scrubbing variables such as steam fraction, water depth, gas flow-rate and particle size has been analyzed. Additionally, comparisons to stand-alone code (i.e., SPARC90) responses have also been made, so that prediction-to-data deviations can be discussed and attributed to either the underlying models and/or their implementation in integral accident codes. This work has demonstrated that the limitations of ASTEC-CPA in capturing fundamental trends of aerosol pool scrubbing are substantial (although the SGTR scenarios cannot properly be considered within the CPA scope) and that they stem both from the original models (i.e., SPARC90) and from their implementation. This work has been carried out within the European SARNET project of the VI Framework Program of EURATOM. (author)

  15. The integral analysis of 40 mm diameter pipe rupture in cooling system of fusion facility W7-X with ASTEC code

    Energy Technology Data Exchange (ETDEWEB)

    Kačegavičius, Tomas, E-mail: Tomas.Kacegavicius@lei.lt; Povilaitis, Mantas, E-mail: Mantas.Povilaitis@lei.lt

    2015-12-15

Highlights: • The analysis of a loss-of-coolant accident (LOCA) in the W7-X facility. • The burst disc is sufficient to prevent the pressure inside the plasma vessel from exceeding 110 kPa. • The developed model of the cooling system adequately represents the expected phenomena. - Abstract: Fusion is an energy production technology that could potentially meet the growing energy demand of the population in the future. Wendelstein 7-X (W7-X) is an experimental facility of the stellarator type, which is currently being built at the Max Planck Institute for Plasma Physics located in Greifswald, Germany. W7-X shall demonstrate that in the future energy could be produced in fusion reactors of this type. A safety analysis is required before the operation of the facility can be started. A rupture of a 40 mm diameter pipe, connected to the divertor unit (the module for plasma cooling) to ensure heat removal from the vacuum vessel in the no-plasma operation mode "baking", is one of the design basis accidents to be investigated. During the "baking" mode the vacuum vessel structures and the working fluid (water) are heated to a temperature of 160 °C. This accident was selected for detailed analysis using the integral code ASTEC, which is developed by IRSN (France) and GRS mbH (Germany). This paper presents the integral analysis of the W7-X response to the selected accident scenario. A model of the main cooling circuit and the "baking" circuit was developed for the ASTEC code. Two cases were analysed: (1) rupture of a pipe connected to the upper divertor unit and (2) rupture of a pipe connected to the lower divertor unit. The results of the analysis showed that in both cases the water is almost completely released from the units into the plasma vessel. In both cases the pressure in the plasma vessel rapidly increases and within 28 s the set point for burst disc opening is reached, preventing further pressurisation.

  16. PROGRAM ASTEC (ADVANCED SOLAR TURBO ELECTRIC CONCEPT). PART 1. CANDIDATE MATERIALS LABORATORY TESTS

    Science.gov (United States)

    A space power system of the type envisioned by the ASTEC program requires the development of a lightweight solar collector of high reflectance...capable of withstanding the space environment for an extended period. A survey of the environment of interest for ASTEC purposes revealed 4 potential...developed by the solar-collector industry for use in the ASTEC program, and to test the effects of space environment on these materials. Of 6 material

  17. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes.
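The feedback-driven rate adaptation protocol described above can be illustrated with a toy sketch. A real system would run an actual BCH decoder; here a stand-in oracle "decodes" successfully once the error-correcting capability t implied by the parity spent covers the actual number of bit errors (roughly t ≈ parity/m for a binary BCH code of length 2^m − 1). The function names, step size, and error model are all illustrative assumptions.

```python
import random

def decode_attempt(n_errors, parity_bits, m=8):
    """Stand-in for BCH decoding: each m parity bits buy roughly one
    correctable error. Returns True when decoding would succeed."""
    t = parity_bits // m
    return n_errors <= t

def rate_adaptive_decode(x, y, step=8, max_parity=128):
    """The decoder holds side information y; over the feedback channel it
    requests parity in increments of `step` bits until decoding succeeds."""
    n_errors = sum(a != b for a, b in zip(x, y))
    parity = 0
    while parity <= max_parity:
        if decode_attempt(n_errors, parity):
            return parity            # rate actually spent on this block
        parity += step               # feedback: request more parity
    return None                      # decoding failure

random.seed(0)
n = 255                              # short block length, as in the paper
x = [random.randint(0, 1) for _ in range(n)]
# Highly correlated side information: each symbol flips with probability 0.01
y = [b ^ (random.random() < 0.01) for b in x]
spent = rate_adaptive_decode(x, y)
```

The rate spent thus tracks the realised number of errors per block instead of a worst-case fixed rate, which is the point of the feedback channel.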

  18. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  19. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
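The subband-adaptive idea above can be sketched in a few lines: each subband picks the densest of the five constellations whose SNR threshold it clears, and the bits per symbol then set the subband throughput. The switching thresholds below are illustrative placeholders, not the paper's levels.

```python
# Toy per-subband modulation adaptation. Thresholds are illustrative only.
MODES = [            # (name, bits per symbol, minimum SNR in dB -- assumed)
    ("none",  0, float("-inf")),
    ("BPSK",  1,  4.0),
    ("QPSK",  2,  7.0),
    ("8AMPM", 3, 11.0),
    ("16QAM", 4, 14.0),
    ("64QAM", 6, 20.0),
]

def pick_mode(snr_db):
    """Return the highest-rate mode whose threshold the subband SNR meets
    (MODES is sorted by increasing threshold)."""
    chosen = MODES[0]
    for mode in MODES:
        if snr_db >= mode[2]:
            chosen = mode
    return chosen

def subband_throughput(snrs_db, code_rate=0.5):
    """Information bits per OFDM symbol across subbands for a given
    turbo code rate (1/2 or 1/3 in the paper)."""
    return sum(pick_mode(s)[1] for s in snrs_db) * code_rate
```

A frequency-selective channel with per-subband SNRs of, say, 25, 8 and 2 dB would then carry 64QAM, QPSK and nothing on the respective subbands.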

  20. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. The use of an adaptive model for each encoded image block dynamically estimates the probabilities of that image block, and the decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
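The core mechanism, an adaptive probability model driving an arithmetic coder, can be shown with a minimal didactic sketch. Real codecs use fixed-point renormalisation; here exact rational arithmetic keeps the interval coding simple and provably lossless. This is a generic binary adaptive arithmetic coder, not the paper's scheme.

```python
from fractions import Fraction

def encode(bits):
    """Adaptive binary arithmetic encoding with Laplace-smoothed counts."""
    low, high = Fraction(0), Fraction(1)
    c0, c1 = 1, 1                       # adaptive model: symbol counts
    for b in bits:
        p0 = Fraction(c0, c0 + c1)      # current estimate of P(bit = 0)
        mid = low + (high - low) * p0
        if b == 0:
            high = mid
            c0 += 1                     # model update after each symbol
        else:
            low = mid
            c1 += 1
    return (low + high) / 2             # any rational inside the final interval

def decode(code, n):
    """Decoder mirrors the encoder's model, so no side information is needed."""
    low, high = Fraction(0), Fraction(1)
    c0, c1 = 1, 1
    out = []
    for _ in range(n):
        p0 = Fraction(c0, c0 + c1)
        mid = low + (high - low) * p0
        if code < mid:
            out.append(0); high = mid; c0 += 1
        else:
            out.append(1); low = mid; c1 += 1
    return out

msg = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
assert decode(encode(msg), len(msg)) == msg   # exact lossless round trip
```

Because both sides update the same counts in the same order, skewed sources shrink the interval slowly for the common symbol, which is where the compression gain comes from.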

  1. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  2. Comparison of plateletpheresis on the Fresenius AS.TEC 204 and Haemonetics MCS 3p.

    Science.gov (United States)

    Ranganathan, Sudha

    2007-02-01

This is an attempt at comparing two cell separators for plateletpheresis, namely the Fresenius AS.TEC 204 and the Haemonetics MCS 3p, at a tertiary care center in India. Donors who weighed between 55-75 kg, had a hematocrit of 41-43%, and platelet counts of 250×10^3-400×10^3/µl were selected for the study. The comparability of the donors who donated on the two cell separators was analysed by the independent-samples t-test, and no significant differences were found (P>0.05). The features compared were the time taken for the procedure, the volume processed on the separators, adverse reactions of the donors, quality control of the product, separation efficiency of the separators, platelet loss in the donors after the procedure, and the predicted versus the actual yield of platelets given by the cell separator. The volume processed to reach a target yield of >3×10^11 was 2.8-3.2 l and equal in both cell separators. Symptoms of citrate toxicity were seen in 4 and 2.5% of donors who donated on the MCS 3p and the AS.TEC 204, respectively, and 3 and 1% of donors, respectively, had vasovagal reactions. All the platelet products collected had a platelet count of >3×10^11; 90% of the platelet products collected on the AS.TEC 204 attained the predicted yield set on the cell separator, whereas 75% of the platelet products collected on the MCS 3p attained the target yield. Quality control of the platelets collected on both cell separators complied with the standards, except that 3% of the platelets collected on the MCS 3p had visible red cell contamination. The separation efficiency of the MCS 3p was higher, at 50-52%, compared to 40-45% on the AS.TEC 204. The provision of double venous access, fewer adverse reactions, negligible RBC contamination and a better predicted platelet yield make the AS.TEC 204 a safer and more reliable alternative to the widely used Haemonetics MCS 3p. Copyright (c) 2006 Wiley-Liss, Inc.

  3. Adaptive RAC codes employing statistical channel evaluation ...

    African Journals Online (AJOL)

    An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...

  4. Application of MELCOR Code to a French PWR 900 MWe Severe Accident Sequence and Evaluation of Models Performance Focusing on In-Vessel Thermal Hydraulic Results

    International Nuclear Information System (INIS)

    De Rosa, Felice

    2006-01-01

In the ambit of the Severe Accident Network of Excellence Project (SARNET), funded by the European Union 6th FISA (Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC Code). One of the reference codes used to compare ASTEC results, for experimental and reactor plant applications, is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations referring to a French PWR 900 MWe and to the accident sequence 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a station blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 °C, a total opening of the three relief valves when Tric (maximum core outlet temperature) rises above 603 K (330 °C), and accumulator isolation when the primary pressure falls below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the structure, data and same conditions as those found in the ASTEC input decks. The main goal of the work presented in this paper is to point out where and when MELCOR provides good enough results and why, in some cases mainly referring to its

  5. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of a slightly higher complexity at the transmitter. This configuration makes it possible to perform rate adaptation while fully preserving the benefits of channel and network coding. We carry out an information-theoretical analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.
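The per-sink rate adaptation idea can be sketched as follows: the transmitter picks, independently for each receiving node, the highest available rate not exceeding that node's instantaneous channel capacity, instead of the single worst-case rate a plain multicast would use. The rate set and SNR values are illustrative assumptions, not the paper's parameters.

```python
import math

RATES = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]   # available coded rates (bits/s/Hz)

def capacity(snr_db):
    """Shannon capacity of an AWGN link in bits/s/Hz."""
    return math.log2(1.0 + 10.0 ** (snr_db / 10.0))

def per_sink_rates(snrs_db):
    """Opportunistic scheme: best supported rate per sink (0 if none fits)."""
    out = []
    for s in snrs_db:
        c = capacity(s)
        feasible = [r for r in RATES if r <= c]
        out.append(max(feasible) if feasible else 0.0)
    return out

def worst_case_rate(snrs_db):
    """Non-adaptive multicast baseline: one rate, set by the weakest sink."""
    return min(per_sink_rates(snrs_db))
```

For sinks at 0, 10 and 20 dB the opportunistic scheme serves each node at its own rate, whereas the baseline drags every node down to the weakest link, which is the throughput gap the abstract refers to.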

  6. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. AFC can also be used together with residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well suited for is the migration path for digital television, where AFC can provide immediate video scalability as well as assist future migrations.
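A minimal sketch of the AFC idea, under the assumption of a 1-D signal and two hypothetical conversion methods (sample repetition vs. linear interpolation); only the per-block method index would be sent as enhancement data, which is why AFC is so cheap compared to residual coefficients:

```python
import numpy as np

def afc_enhancement(low_res, original, block=8):
    """For each block, pick the format-conversion (upsampling) method that
    best matches the original, and keep only its index as enhancement data."""
    methods = {
        0: lambda lr: np.repeat(lr, 2),                      # sample repetition
        1: lambda lr: np.interp(np.arange(2 * len(lr)) / 2.0,
                                np.arange(len(lr)), lr),     # linear interpolation
    }
    choices = []
    for start in range(0, len(low_res), block):
        lr = low_res[start:start + block]
        hi = original[2 * start:2 * (start + block)]
        errs = {m: np.mean((f(lr) - hi) ** 2) for m, f in methods.items()}
        choices.append(min(errs, key=errs.get))  # a few bits per block
    return choices
```

On a smooth ramp signal the encoder selects linear interpolation for every block, since it reconstructs the original far better than repetition.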

  7. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Science.gov (United States)

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  8. Comparison of different methods used in integral codes to model coagulation of aerosols

    Science.gov (United States)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
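None of the compared modules is reproduced here; a minimal sketch of the underlying discrete Smoluchowski coagulation equation with an assumed constant kernel, truncated so that mass stays on the resolved size grid, illustrates the mass-conservation check used in such comparisons:

```python
import numpy as np

def coagulation_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equation.
    n[s-1] is the number density of particles of size s (monomer units);
    K is a constant coagulation kernel. Pairs whose sum exceeds the largest
    resolved size are ignored, which keeps total mass exactly conserved."""
    N = len(n)
    dndt = np.zeros(N)
    for s in range(1, N + 1):
        gain = 0.5 * K * sum(n[i - 1] * n[s - i - 1] for i in range(1, s))
        loss = K * n[s - 1] * sum(n[j - 1] for j in range(1, N - s + 1))
        dndt[s - 1] = gain - loss
    return n + dt * dndt

n = np.zeros(32)
n[0] = 1.0                                        # monodisperse initial spectrum
mass0 = sum((s + 1) * v for s, v in enumerate(n))
for _ in range(50):
    n = coagulation_step(n, K=1.0, dt=0.05)
mass1 = sum((s + 1) * v for s, v in enumerate(n))
```

The distribution-function errors discussed in the record come from coarser sectional grids (volume ratios of two or more between neighboring bins), not from this fully resolved discrete form.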

  9. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  10. Comparison of ASTECV1.3.2 and ASTECV2 results for QUENCH 12 test

    International Nuclear Information System (INIS)

    Stefanova, A.

    2010-01-01

    This paper presents a comparison of QUENCH-12 test results calculated with the ASTECv1.3R2 and ASTECv2 computer codes. The test was performed to investigate the behavior of VVER fuel assemblies. This investigation is part of the 6th and 7th Framework Programmes of the EC-supported ISTC program. The test facility is located at Forschungszentrum Karlsruhe; its structure allows experimental studies under transient and accident conditions. The ASTECv1.3R2 and ASTECv2 computer codes have been used to simulate the investigated test. The baseline input model for ASTEC was provided by Forschungszentrum Karlsruhe. During the preparation of the QUENCH-12 experiment, the input deck was adapted to new initial and boundary conditions. The comparison shows good agreement between the measured data and the ASTEC calculated results. (author)

  11. Analysis and evaluation of the ASTEC model basis on plant simulations. 2. Technical report

    International Nuclear Information System (INIS)

    Koppers, Vera; Braehler, Thimo; Koch, Marco K.

    2015-06-01

    The present report is the 2nd Technical Report of the research project ''Analysis and Evaluation of the ASTEC model basis'', funded by the Federal Ministry for Economic Affairs and Energy (BMWi 1501433) and conducted at the Reactor Simulation and Safety Group, Chair of Energy Systems and Energy Economics (LEE) at Ruhr-Universitaet Bochum. Within this report, the quality of a nuclear power plant simulation with ASTEC is investigated. Different parameters are varied to analyze the simulation stability within the dataset, which describes a generic German Konvoi power plant and is deposited in the program package of ASTEC. Firstly, the plant specifications in the dataset are checked for plausibility. In addition, the compliance of the dataset with the nodalization rules is verified. After that, the stationary phase, in which no accident is calculated, is analyzed, and parametric studies are performed in the transient phase, focusing on the primary and secondary circuit as well as on the containment behavior. The calculations focusing on the primary and secondary circuit indicate a high dependency of the simulation results on the user's input in the dataset. There are significant deviations between the individual simulation results, for example in the calculated point in time of reactor pressure vessel failure. Even changes in the stationary phase cause a significantly earlier reactor pressure vessel failure compared to the simulation with the original data. Beyond that, the location of the leak in the reactor pressure vessel lower head varied and therefore cannot be clearly determined, although the user made no changes to the accident course. A reliable prediction of the plant behavior under severe accident conditions is therefore difficult using ASTEC. The results of the parametric studies within the containment show the same significant influence of certain parameter changes on the simulation results. By using the

  12. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Science.gov (United States)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one-dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
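SAGE itself is not reproduced here; a minimal 1-D sketch of the equidistribution idea behind the tension-spring analogy (spacing made inversely proportional to a gradient-based weight) might look like:

```python
import numpy as np

def adapt_grid_1d(x, f, alpha=1.0):
    """Redistribute grid points along a line so that local spacing is
    inversely proportional to a tension weight w = 1 + alpha*|df/dx|
    (equidistribution: equal 'spring energy' per interval)."""
    w = 1.0 + alpha * np.abs(np.gradient(f, x))
    # cumulative weight ("stretched" arc length) along the coordinate line
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    # place the same number of points at equal increments of cumulative weight
    targets = np.linspace(0.0, s[-1], len(x))
    return np.interp(targets, s, x)
```

Adapting a uniform grid to a tanh profile (a smeared shock) clusters the points around the steep gradient while keeping the endpoints fixed, which is the behavior the tension forces produce in SAGE's 1-D sweeps.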

  13. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
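The authors' decoder is not reproduced here, but the syndrome check it relies on can be sketched for the standard rate-1/2 code with generators 7 and 5 (octal): the syndrome former output is all-zero exactly when the received pair of streams is a codeword, which is what lets a syndrome-based decoder stay deactivated under good transmission conditions.

```python
import numpy as np

G1, G2 = [1, 1, 1], [1, 0, 1]   # generator polynomials 7 and 5 (octal)

def encode(u):
    """Convolutional encoding: two parity streams v1 = u*g1, v2 = u*g2 (mod 2)."""
    return np.convolve(u, G1) % 2, np.convolve(u, G2) % 2

def syndrome(v1, v2):
    """Syndrome former: s = v1*g2 + v2*g1 (mod 2) vanishes for any codeword,
    since u*g1*g2 + u*g2*g1 = 0 over GF(2)."""
    return (np.convolve(v1, G2) + np.convolve(v2, G1)) % 2

u = np.array([1, 0, 1, 1, 0, 0, 1])
v1, v2 = encode(u)
s_clean = syndrome(v1, v2)   # all zero: decoding effort can be skipped
v1[3] ^= 1                   # single transmission error
s_err = syndrome(v1, v2)     # nonzero: full error correction is triggered
```
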

  14. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  15. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....

  16. Further analysis of the FRONT model in ASTEC by simulating the hydrogen deflagration experiment BMC Ix9

    International Nuclear Information System (INIS)

    Braehler, Thimo; Koch, Marco K.

    2011-01-01

    Effects of possible hydrogen deflagration, like pressure build-up and temperature increase, can become important for the evaluation of late phases in loss of coolant accidents. In this compact, the simulation of the hydrogen deflagration test BMC Ix9 with the FRONT model of the integral lumped-parameter code ASTEC is treated. This model has been available since mid-2009, released with ASTEC V2.0. To check the validity of the model with respect to its applicability to different phenomena, a large number of simulations is necessary. The model was used by RUB in the frame of the 'International Standard Problem on Hydrogen Combustion (ISP-49)' and within the EC NoE SARNET2. It has been concluded that the model is able to simulate a broad range of hydrogen deflagration phenomena under different experimental conditions. The experiments analysed in the mentioned benchmarks are characterised by flame propagation in the vertical direction; moreover, flame propagation in multi-compartment geometries was not considered. In the BMC Ix9 test, horizontal hydrogen deflagration with flame propagation in 3 rooms was investigated. The FRONT model was already validated on the BMC Hx23 experiment with sufficient results. In comparison to this test, the number of compartments and the initial gas composition, i.e. the hydrogen and steam concentrations, differ in the BMC Ix9 experiment. Previous investigations at RUB showed that the modelling of turbulence related to the transport between different compartments, and the determination of this quantity, has a strong influence on the simulation results. In the following, the FRONT model is described briefly, the simulation results are discussed and a first recommendation for the nodalisation is given. (orig.)

  17. Adaptive Modulation and Coding for LTE Wireless Communication

    Science.gov (United States)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS and CDMA2000 networks. LTE aims to become the first global mobile phone standard, despite the barrier of different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase network capacity and downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the condition of the mobile wireless channel, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.
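The AMC principle — picking modulation and code rate from channel state feedback — can be illustrated with a toy lookup (the thresholds are illustrative, not the 3GPP CQI-to-MCS table):

```python
def select_mcs(snr_db):
    """Map an estimated SNR to a modulation-and-coding scheme.
    Thresholds and rates are illustrative placeholders."""
    table = [  # (min SNR dB, modulation, bits/symbol, code rate)
        (18.0, "64QAM", 6, 0.75),
        (12.0, "16QAM", 4, 0.66),
        (5.0,  "QPSK",  2, 0.50),
        (-99.0, "QPSK", 2, 0.12),
    ]
    for thresh, mod, bits, rate in table:
        if snr_db >= thresh:
            return mod, bits * rate   # scheme and spectral efficiency (bit/s/Hz)
```

A good channel report thus yields a dense constellation and a high code rate, while a poor one falls back to robust QPSK with heavy coding, keeping the bit error rate bounded.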

  18. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed

  19. Evaporation over sump surface in containment studies: code validation on TOSQAN tests

    International Nuclear Information System (INIS)

    Malet, J.; Gelain, T.; Degrees du Lou, O.; Daru, V.

    2011-01-01

    During the course of a severe accident in a Nuclear Power Plant, water can be collected in the sump containment through steam condensation on walls and spray systems activation. The objective of this paper is to present code validation on evaporative sump tests performed on the TOSQAN facility. The ASTEC-CPA code is used as a lumped-parameter code and specific user-defined-functions are developed for the TONUS-CFD code. The tests are air-steam tests, as well as tests with other non-condensable gases (He, CO 2 and SF 6 ) under steady and transient conditions. The results show a good agreement between codes and experiments, indicating a good behaviour of the sump models in both codes. (author)
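The ASTEC-CPA and TONUS sump models themselves are not reproduced here; a common lumped formulation of the physics being validated computes the evaporation rate from a mass-transfer coefficient and the vapour-density difference between the saturated sump surface and the bulk gas (Antoine correlation for water saturation pressure assumed; the mass-transfer coefficient below is an assumed constant, not a code value):

```python
M_W = 0.018015   # kg/mol, molar mass of water
R = 8.314        # J/(mol K)

def p_sat(t_c):
    """Water saturation pressure [Pa], Antoine correlation (approx. 1-100 deg C)."""
    p_mmhg = 10.0 ** (8.07131 - 1730.63 / (233.426 + t_c))
    return p_mmhg * 133.322

def evaporation_rate(t_surf_c, p_vap_bulk, t_bulk_c, area, h_m=5e-3):
    """Evaporated mass flow [kg/s] from the sump surface of given area [m2].
    h_m: assumed constant mass-transfer coefficient [m/s]."""
    rho_surf = p_sat(t_surf_c) * M_W / (R * (t_surf_c + 273.15))   # saturated surface
    rho_bulk = p_vap_bulk * M_W / (R * (t_bulk_c + 273.15))        # bulk atmosphere
    return h_m * area * (rho_surf - rho_bulk)
```

A warmer sump surface raises the saturation vapour density and hence the evaporation rate, which is the behaviour checked against the TOSQAN steady and transient tests.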

  20. A multiobjective approach to the genetic code adaptability problem.

    Science.gov (United States)

    de Oliveira, Lariza Laura; de Oliveira, Paulo S L; Tinós, Renato

    2015-02-19

    The organization of the canonical code has intrigued researchers since it was first described. If we consider all codes mapping the 64 codons into 20 amino acids and one stop codon, there are more than 1.51×10^84 possible genetic codes. The main question related to the organization of the genetic code is why exactly the canonical code was selected among this huge number of possible genetic codes. Many researchers argue that the organization of the canonical code is a product of natural selection and that the code's robustness against mutations supports this hypothesis. In order to investigate the natural selection hypothesis, some researchers employ optimization algorithms to identify regions of the genetic code space where the best codes, according to a given evaluation function, can be found (the engineering approach). The optimization process uses only one objective to evaluate the codes, generally based on robustness for an amino acid property. Only one objective is also employed in the statistical approach for the comparison of the canonical code with random codes. We propose a multiobjective approach where two or more objectives are considered simultaneously to evaluate the genetic codes. In order to test our hypothesis that the multiobjective approach is useful for the analysis of genetic code adaptability, we implemented a multiobjective optimization algorithm where two objectives are simultaneously optimized. Using as objectives the robustness against mutation with respect to the amino acid property polar requirement (objective 1) and the robustness with respect to hydropathy index or molecular volume (objective 2), we found solutions closer to the canonical genetic code in terms of robustness than the results using only one objective reported by other authors. Using more objectives, more optimal solutions are obtained and, as a consequence, more information can be used to investigate the adaptability of the genetic code. The multiobjective approach
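The two-objective evaluation can be illustrated on a deliberately toy "genetic code" (2-letter codons and four amino acids with made-up property values; not the 64-codon canonical code, real biochemical data, or the authors' algorithm):

```python
import itertools

BASES = "ACGU"
CODONS = ["".join(p) for p in itertools.product(BASES, repeat=2)]  # 16 toy codons

# hypothetical amino-acid property tables (stand-ins for polar requirement,
# hydropathy index, etc. -- NOT real biochemical data)
PROP1 = {aa: float(i) for i, aa in enumerate("WXYZ")}
PROP2 = {aa: float(i * i) for i, aa in enumerate("WXYZ")}

def robustness(code, prop):
    """Mean squared property change over all single-point codon mutations
    (lower = more robust). 'code' maps each codon to an amino acid."""
    errs = []
    for c in CODONS:
        for pos in range(len(c)):
            for b in BASES:
                if b != c[pos]:
                    mutant = c[:pos] + b + c[pos + 1:]
                    errs.append((prop[code[c]] - prop[code[mutant]]) ** 2)
    return sum(errs) / len(errs)

def objectives(code):
    """Evaluate one candidate code on both objectives simultaneously."""
    return (robustness(code, PROP1), robustness(code, PROP2))

def dominates(a, b):
    """Pareto dominance for minimization of objective tuples."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

code_by_first_base = {c: "WXYZ"[BASES.index(c[0])] for c in CODONS}
code_constant = {c: "W" for c in CODONS}  # degenerate, maximally robust code
```

A multiobjective optimizer would keep the set of codes not dominated by any other, rather than ranking codes on a single robustness score.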

  1. Fresenius AS.TEC204 blood cell separator.

    Science.gov (United States)

    Sugai, Mikiya

    2003-02-01

    Fresenius AS.TEC204 is a third-generation blood cell separator that incorporates the continuous centrifugal separation method and automatic control of the cell separation process. Continuous centrifugation separates cell components according to their specific gravity, and different cell components are either harvested or eliminated as needed. The interface between the red blood cells and plasma is optically detected, and the Interface Control (IFC) cooperates with different pumps, monitors and detectors to harvest the required components automatically. The system is composed of three major sections: the Front Panel Unit, the Pump Unit, and the Centrifuge Unit. This unit can be used for a wide variety of clinical applications, including collection of platelets, peripheral blood stem cells, bone marrow stem cells, granulocytes and mononuclear cells, exchange of plasma or red cells, and plasma treatment.

  2. Model of nuclear reactor type VVER-1000/V-320 built by computer code ATHLET-CD

    International Nuclear Information System (INIS)

    Georgiev, Yoto; Filipov, Kalin; Velev, Vladimir

    2014-01-01

    A model of the nuclear reactor type VVER-1000/V-320 developed for the computer code ATHLET-CD2.1A is presented. Validation of the model has been performed; an analysis of a station blackout scenario with a LOCA on the fourth cold leg is shown. After the calculation was completed, the results were checked through comparison with the results from the computer codes ATHLET-2.1A, ASTEC-2.1 and RELAP5mod3.2.

  3. Adaptation of HAMMER computer code to CYBER 170/750 computer

    International Nuclear Information System (INIS)

    Pinheiro, A.M.B.S.; Nair, R.P.K.

    1982-01-01

    The adaptation of HAMMER computer code to CYBER 170/750 computer is presented. The HAMMER code calculates cell parameters by multigroup transport theory and reactor parameters by few group diffusion theory. The auxiliary programs, the carried out modifications and the use of HAMMER system adapted to CYBER 170/750 computer are described. (M.C.K.) [pt

  4. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
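The flight system described in this record is not reproduced here, but its "pick one of a few codes per block" idea can be sketched with Golomb-Rice codes, choosing the parameter k per 21-pixel block of nonnegative prediction residuals (a simplification of the concatenated-code Basic Compressor):

```python
def rice_encode(block, k):
    """Golomb-Rice encode nonnegative ints: unary quotient + k-bit remainder."""
    bits = []
    for n in block:
        q, r = n >> k, n & ((1 << k) - 1)
        bits += [1] * q + [0]                                # unary part, 0-terminated
        bits += [(r >> (k - 1 - i)) & 1 for i in range(k)]   # k LSBs, MSB first
    return bits

def rice_decode(bits, count, k):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q, i = q + 1, i + 1
        i += 1                                               # skip the terminating 0
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

def encode_block(block, options=(0, 1, 2, 3)):
    """Per-block adaptation: try each candidate k and keep the shortest,
    sending the chosen k (2 bits here) as side information."""
    best = min(options, key=lambda k: len(rice_encode(block, k)))
    return best, rice_encode(block, best)
```

Low-entropy blocks select a small k and compress tightly, while blocks with larger residuals automatically switch to a larger k, mirroring the automatic code selection per 21-pixel block described above.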

  5. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length and a corresponding decoding scheme is proposed. RA-MLC scheme combines the multilevel coded and modulation technology with the binary linear block code at the transmitter. Bits division, coding, optional interleaving, and modulation are carried out by the preset rule, then transmitted through standard single mode fiber span equal to 100 km. The receiver improves the accuracy of decoding by means of soft information passing through different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve bit error rate of 1E-5 when optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptation without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  6. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BDPSNR is observed for 18 sequences when the target complexity is around 40%.

  7. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    Science.gov (United States)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates the heating and cooling processes of primordial-composition gas in an ionizing ultraviolet background radiation field.
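The Cloud-in-Cell deposition step used by such particle-mesh solvers can be sketched in 1-D (a simplification; Nyx itself works on 3-D adaptive grids):

```python
import numpy as np

def cic_deposit(positions, masses, ngrid, box):
    """Cloud-in-Cell deposition: each particle's mass is shared linearly
    between its two nearest cell centres (1-D, periodic box)."""
    rho = np.zeros(ngrid)
    dx = box / ngrid
    for x, m in zip(positions, masses):
        xi = x / dx - 0.5            # in units of dx; cell centres sit at (i+0.5)*dx
        i = int(np.floor(xi))
        f = xi - i                   # fractional distance to the left cell centre
        rho[i % ngrid] += m * (1.0 - f)
        rho[(i + 1) % ngrid] += m * f
    return rho / dx                  # mass per unit length
```

The linear weights make the deposited density vary continuously as particles move, and total mass is conserved exactly; interpolating forces back to the particles with the same weights keeps the scheme self-consistent.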

  8. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus E b / N 0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  9. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Full Text Available Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER versus E b / N 0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  10. Water evaporation over sump surface in nuclear containment studies: CFD and LP codes validation on TOSQAN tests

    Energy Technology Data Exchange (ETDEWEB)

    Malet, J., E-mail: jeanne.malet@irsn.fr [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SCA BP 68, 91192 Gif-sur-Yvette (France); Degrees du Lou, O. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SCA BP 68, 91192 Gif-sur-Yvette (France); Arts et Métiers ParisTech, DynFluid Lab. EA92, 151, boulevard de l’Hôpital, 75013 Paris (France); Gelain, T. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SCA BP 68, 91192 Gif-sur-Yvette (France)

    2013-10-15

    Highlights: • Simulations of evaporative TOSQAN sump tests are performed. • These tests are under air–steam gas conditions with addition of He, CO{sub 2} and SF{sub 6}. • ASTEC-CPA LP and TONUS-CFD codes with UDF for sump model are used. • Validation of sump models of both codes show good results. • The code–experiment differences are attributed to turbulent gas mixing modeling. -- Abstract: During the course of a severe accident in a Nuclear Power Plant, water can be collected in the sump containment through steam condensation on walls and spray systems activation. The objective of this paper is to present code validation on evaporative sump tests performed on TOSQAN facility. The ASTEC-CPA code is used as a lumped-parameter code and specific user-defined-functions are developed for the TONUS-CFD code. The seven tests are air–steam tests, as well as tests with other non-condensable gases (He, CO{sub 2} and SF{sub 6}) under steady and transient conditions (two depressurization tests). The results show a good agreement between codes and experiments, indicating a good behavior of the sump models in both codes. The sump model developed as User-Defined Functions (UDF) for TONUS is considered as well validated and is ‘ready-to-use’ for all CFD codes in which such UDF can be added. The remaining discrepancies between codes and experiments are caused by turbulent transport and gas mixing, especially in the presence of non-condensable gases other than air, so that code validation on this important topic for hydrogen safety analysis is still recommended.

  11. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Science.gov (United States)

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control-plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
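The rate-adaptation step described above can be sketched as a simple threshold lookup from measured OSNR to code rate. The OSNR thresholds and the function name below are illustrative assumptions, not values from the experiment:

```python
# Hypothetical sketch of the control-plane decision: map a measured OSNR to
# one of the three LDPC code rates (0.8, 0.75, 0.7).  The threshold values
# are invented for illustration only.
def select_ldpc_rate(osnr_db):
    """Pick the highest-rate code whose assumed OSNR requirement is met."""
    # (minimum OSNR in dB, code rate) -- thresholds are assumptions
    table = [(22.0, 0.80), (19.0, 0.75), (float("-inf"), 0.70)]
    for min_osnr, rate in table:
        if osnr_db >= min_osnr:
            return rate

print(select_ldpc_rate(23.5))  # high OSNR -> 0.8
print(select_ldpc_rate(20.1))  # mid OSNR  -> 0.75
print(select_ldpc_rate(15.0))  # low OSNR  -> 0.7
```

In the experiment this decision sits in a feedback loop: the monitor measures OSNR, the control plane picks the rate, and the transmitter re-encodes, with the whole cycle contributing to the reported 2273 ms latency.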

  12. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block-adaptive coding algorithm based on the discrete cosine transform for compressing digitized mammograms is described. A homogeneous distribution of the degradation across the decoded images is obtained using a spatially adaptive threshold, which depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression ratios (11 to 26) is possible in the mammography field.
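The core of such a scheme, transforming a block with the DCT and discarding coefficients below a block-adaptive threshold, can be sketched as follows. This is a naive pure-Python illustration; the paper's actual rule ties the threshold to each block's coding error, which is only hinted at here by the `thr` parameter:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (pure Python, for illustration)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def threshold_code(coeffs, thr):
    """Keep only coefficients at or above the (block-adaptive) threshold."""
    return [[c if abs(c) >= thr else 0.0 for c in row] for row in coeffs]

# a flat 4x4 block: all energy lands in the DC coefficient
flat = [[10.0] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 6))  # -> 40.0
```

A smooth, low-contrast block like this survives aggressive thresholding almost losslessly, which is why per-block adaptation of `thr` matters for mammograms with fine structures.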

  13. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Science.gov (United States)

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  14. Satellite Media Broadcasting with Adaptive Coding and Modulation

    Directory of Open Access Journals (Sweden)

    Georgios Gardikis

    2009-01-01

    Adaptive Coding and Modulation (ACM is a feature incorporated into the DVB-S2 satellite specification, allowing real-time adaptation of transmission parameters according to the link conditions. Although ACM was originally designed for optimizing unicast services, this article discusses the expansion of its usage to broadcasting streams as well. For this purpose, a general cross-layer adaptation approach is proposed, along with its realization into a fully functional experimental network, and test results are presented. Finally, two case studies are analysed, assessing the gain derived by ACM in a real large-scale deployment, involving HD services provision to two different geographical areas.

  15. Adaptation of radiation shielding code to space environment

    International Nuclear Information System (INIS)

    Okuno, Koichi; Hara, Akihisa

    1992-01-01

    Recently, interest in the development of space has grown. Space development involves many problems, one of which is protection from cosmic rays. Cosmic rays are radiation of ultrahigh energy, and until now there has been no radiation shielding design code that copes with them. Therefore, a high energy radiation shielding design code for accelerators was improved to cope with the peculiarities of cosmic rays. Moreover, the radiation dose equivalent rate in a moon base protected against cosmic rays was simulated using the improved code. Covering with regolith is an important countermeasure for radiation protection, and the effect of regolith was confirmed using the improved code. Galactic cosmic rays, solar flare particles, the radiation belts, the adaptation of the radiation shielding code HERMES to the space environment, the improvement of the three-dimensional hadron cascade code HETC-KFA-2 and the electromagnetic cascade code EGS4-KFA, and the cosmic ray simulation are reported. (K.I.)

  16. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    International Nuclear Information System (INIS)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code by outer iteration with a Picard scheme, which successively updates the coefficient matrix from previously calculated values. The scheme is simple and effective for nonlinear problems, but its effectiveness depends greatly on the under-relaxation capability. Accuracy and speed of the calculation are very sensitive to the under-relaxation factor used in the outer iteration when updating the axial mass flow from the continuity equation. In MATRA, the under-relaxation factor is generally a fixed, empirically determined value. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code compared with a fixed factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence for mass conservation in the subchannel code MATRA. The most efficient approach for adaptive under-relaxation, however, appears to be strongly problem dependent.
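The idea of adapting the under-relaxation factor across Picard iterations can be illustrated with a minimal scalar fixed-point sketch. This is an assumption-laden toy, not MATRA's actual update logic: the factor grows while the residual shrinks and is cut back when the iteration starts to diverge.

```python
import math

# Toy Picard iteration x <- (1-w)*x + w*g(x) with an adaptive
# under-relaxation factor w (illustrative update rule, not MATRA's).
def picard_adaptive(g, x0, tol=1e-10, max_iter=200):
    x, omega = x0, 0.5          # initial guess and relaxation factor
    prev_res = float("inf")
    for _ in range(max_iter):
        x_new = (1.0 - omega) * x + omega * g(x)
        res = abs(x_new - x)
        if res < tol:
            return x_new
        # adapt: accelerate on monotone convergence, damp on growth
        omega = min(1.0, omega * 1.1) if res < prev_res else max(0.1, omega * 0.5)
        prev_res, x = res, x_new
    return x

# fixed point of g(x) = cos(x) is ~0.739085
root = picard_adaptive(math.cos, 1.0)
print(round(root, 6))  # -> 0.739085
```

In a subchannel code the scalar `x` would be the axial mass-flow field and `g` the continuity-equation update, but the relaxation logic is the same in spirit.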

  17. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Almost all existing approaches to video coding exploit temporal redundancy by block-matching-based motion estimation and compensation. Despite its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward-adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves a smaller MSE than a full-search, quarter-pel block matching algorithm (BMA), without the need to transmit any overhead.
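A one-dimensional analogue of backward-adaptive least-square prediction can be sketched as follows. The two-tap window and the signal are illustrative simplifications of the 2-D spatio-temporal predictor in the paper; the key property is that the coefficients are fit on the causal past only, so a decoder can repeat the computation and no coefficients need to be transmitted:

```python
# Backward-adaptive least-square prediction, 1-D toy version: fit
# y[n] ~ a1*y[n-1] + a2*y[n-2] on the causal window, predict the next sample.
def lsp_predict(past):
    rows = [(past[i - 1], past[i - 2], past[i]) for i in range(2, len(past))]
    # accumulate the 2x2 normal equations (A^T A) a = A^T b
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows)
    b2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1 * past[-1] + a2 * past[-2]

# a pure second-order recurrence is predicted exactly (up to float error)
sig = [1.0, 2.0]
for _ in range(8):
    sig.append(1.5 * sig[-1] - 0.5 * sig[-2])
print(lsp_predict(sig))
```

Because the fit uses only already-decoded samples, prediction stays synchronized between encoder and decoder — the "no overhead" property claimed in the abstract.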

  18. The FORTRAN NALAP code adapted to a microcomputer compiler

    International Nuclear Information System (INIS)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso

    2010-01-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA (TEcnologia de Reatores Rapidos Avancados - Technology for Advanced Fast Reactors) project, aimed at a space reactor application. In this work, to support the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium-cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, eliminate old statements, introduce new ones and enable extended precision mode. The source program was able to solve three sample cases under the protected-transient conditions suggested in the literature: normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and protected overpower transients. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 1980s. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the use of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  19. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA (TEcnologia de Reatores Rapidos Avancados - Technology for Advanced Fast Reactors) project, aimed at a space reactor application. In this work, to support the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium-cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, eliminate old statements, introduce new ones and enable extended precision mode. The source program was able to solve three sample cases under the protected-transient conditions suggested in the literature: normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and protected overpower transients. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 1980s. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the use of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  20. ICAN Computer Code Adapted for Building Materials

    Science.gov (United States)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  1. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
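The threshold-driven selection between coders of different rates can be sketched as follows. The two stand-in "coders" below (a mean-only coder and a pass-through) are illustrative placeholders, not the DCT coder mixture actually used in MBC:

```python
# Toy threshold-driven coder selection: code each block with the cheapest
# coder first; escalate to a higher-rate coder if distortion exceeds `thr`.
def mse(block, rec):
    return sum((b - r) ** 2 for b, r in zip(block, rec)) / len(block)

def code_block(block, thr):
    mean = sum(block) / len(block)
    dc_rec = [mean] * len(block)
    if mse(block, dc_rec) <= thr:
        return "dc-only", dc_rec          # low-rate coder suffices
    return "full", list(block)            # escalate (stand-in for a richer DCT coder)

flat = [10.0, 10.0, 10.2, 9.8]
busy = [0.0, 50.0, 5.0, 40.0]
print(code_block(flat, thr=1.0)[0])  # -> dc-only
print(code_block(busy, thr=1.0)[0])  # -> full
```

In MBC the escalation ladder has several DCT coders of increasing rate, so smooth regions spend few bits while textured regions get more, which is what makes the overall 0.5 bit/pel average possible.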

  2. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  3. Development of a Zero-Dimensional Particle Generation Model in SFR-Containments under Accidental Conditions

    Energy Technology Data Exchange (ETDEWEB)

    García, M.; Herranz, L.E.

    2015-07-01

    During postulated Beyond Design Basis Accidents (BDBAs) in Sodium-cooled Fast Reactors (SFRs), contaminated sodium at high temperature may leak into the containment and burn in the presence of oxygen. As a result, large quantities of sodium oxide aerosols are produced. In the frame of the EU-JASMIN project, a particle generation model that calculates the particle generation rate and the primary particle size during a generic sodium pool fire has been developed for implementation in the ASTEC-Na code. This paper presents the adaptation of the 3-D particle generation model to a 0-D model based on the generation of particles under average system conditions. Deviations of less than 20% between the two approaches have been found in all the simulated scenarios. From the 0-D model, simple correlations for the particle generation rate and the primary particle size as a function of Na-oxide vapour pressures, temperature and sodium pool characteristics have been derived for straightforward implementation in the ASTEC-Na code. (Author)

  4. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
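The AMBTC baseline that the quadtree method refines can be sketched in a few lines (standard AMBTC logic; the variable names and the tiny test block are ours): each block is represented by a bit plane plus two reconstruction levels, the means of the pixels above and below the block mean.

```python
# Absolute moment block truncation coding (AMBTC) of one block, flattened
# to a 1-D list for simplicity.
def ambtc(block):
    mean = sum(block) / len(block)
    bits = [1 if p >= mean else 0 for p in block]
    hi = [p for p, b in zip(block, bits) if b]
    lo = [p for p, b in zip(block, bits) if not b]
    h = sum(hi) / len(hi) if hi else mean   # level for '1' pixels
    l = sum(lo) / len(lo) if lo else mean   # level for '0' pixels
    return bits, l, h

def ambtc_decode(bits, l, h):
    return [h if b else l for b in bits]

block = [10, 12, 40, 42]
bits, l, h = ambtc(block)
print(bits, l, h)                # -> [0, 0, 1, 1] 11.0 41.0
print(ambtc_decode(bits, l, h))  # -> [11.0, 11.0, 41.0, 41.0]
```

The paper's contribution sits on top of this: a quadtree splits textured blocks into smaller ones, and the bit plane of a minimal-size block is transmitted adaptively depending on its AMBTC MSE loss.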

  5. Adaption of the PARCS Code for Core Design Audit Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Uhm, Jae Beop; Kim, Hyunjik [Nuclear Safety Evaluation, Daejeon (Korea, Republic of); Jeong, Hun Young; Ahn, Seunghoon; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

    The eigenvalue calculation also includes quasi-static core depletion analyses. PARCS has implemented a variety of features and has been qualified as a regulatory audit code in conjunction with other NRC thermal-hydraulic codes such as TRACE or RELAP5. In this study, as an adaptation effort for audit applications, PARCS is applied to an audit analysis of a reload core design. The lattice physics code HELIOS is used for cross-section generation, and the PARCS-HELIOS code system has been established as a core analysis tool. Calculation results have been compared over a wide spectrum of quantities such as power distribution, critical soluble boron concentration, and rod worth. Reasonable agreement between the audit calculation and the reference results has been found.

  6. Use of sensitivity-information for the adaptive simulation of thermo-hydraulic system codes

    International Nuclear Information System (INIS)

    Kerner, Alexander M.

    2011-01-01

    This thesis describes the development of methods for online adaptation of dynamic plant simulations of a thermal-hydraulic system code to measurement data. The approaches described are mainly based on the use of sensitivity information in different areas: statistical sensitivity measures are used to identify the parameters to be adapted, and online sensitivities for the parameter adjustment itself. For the parameter adjustment, the method of a ''system-adapted heuristic adaptation with partial separation'' (SAHAT) was developed, which combines certain variants of parameter estimation and control with supporting procedures to solve the basic problems. The applicability of the methods is shown by adaptive simulations of a PKL-III experiment and by selected transients in a nuclear power plant. Finally, the main perspectives for the application of a tracking simulator to a system code are identified.
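The general principle, using an online sensitivity to steer a model parameter toward measurement data, can be illustrated schematically. This toy is not the SAHAT method itself; the model, the gain, and the damped Newton-type update rule are assumptions for illustration:

```python
# Schematic online parameter adaptation: each step, estimate the sensitivity
# dy/dp by finite differences and nudge p to reduce the model-measurement
# mismatch (illustrative logic only, not the thesis' algorithm).
def adapt_parameter(model, p, measurements, gain=0.8, eps=1e-6):
    for y_meas in measurements:
        y_sim = model(p)
        sens = (model(p + eps) - y_sim) / eps   # online sensitivity dy/dp
        p += gain * (y_meas - y_sim) / sens     # damped Newton-type update
    return p

# "plant" generates data with true parameter 2.5; model starts at p = 1.0
model = lambda p: p * p + 3.0
meas = [model(2.5)] * 10
p_hat = adapt_parameter(model, 1.0, meas)
print(round(p_hat, 4))  # -> 2.5
```

A tracking simulator repeats such updates continuously as new measurements arrive, which is the setting the thesis addresses at system-code scale.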

  7. Validation of the RALOC-mod.4 thermal-hydraulics code on evaporation transients in the Phebus containment

    International Nuclear Information System (INIS)

    Spitz, P.B.; Lemoine, F.; Tirini, S.

    1997-01-01

    IPSN (Nuclear Protection and Safety Institute) and GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) are developing the ESCADRE-ASTEC code systems devoted to the prediction of the behaviour of water-cooled reactors during a severe accident. The RALOC-mod 4 code belongs to this system and is specifically devoted to containment thermal-hydraulics studies. IPSN has designed a Thermal Hydraulic Containment Test Program in support of the Phebus Fission Product Test Program /2/. Evaporation tests have recently been performed in the Phebus containment test facility. The objective of this work is to assess against these tests the capability of the RALOC-mod 4 code to capture the phenomena observed in these experiments, in particular the evaporation and wall heat transfers. (DM)

  8. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    Science.gov (United States)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  9. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz

    2015-09-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed-form solutions for the energy-limited optimal power allocation and investigate the diversity gain of different STC-ARQ schemes. In addition, sufficient conditions are derived for the usefulness of ARQ in terms of energy-limited outage probability. The results show that, for a large range of feedback costs, the energy efficiency is substantially improved by the combination of ARQ and STC techniques if optimal power allocation is utilized. © 2014 IEEE.

  10. Adjuvant external beam radiotherapy in the treatment of endometrial cancer (MRC ASTEC and NCIC CTG EN.5 randomised trials): pooled trial results, systematic review, and meta-analysis.

    Science.gov (United States)

    Blake, P; Swart, Ann Marie; Orton, J; Kitchener, H; Whelan, T; Lukka, H; Eisenhauer, E; Bacon, M; Tu, D; Parmar, M K B; Amos, C; Murray, C; Qian, W

    2009-01-10

    Early endometrial cancer with low-risk pathological features can be successfully treated by surgery alone. External beam radiotherapy added to surgery has been investigated in several small trials, which have mainly included women at intermediate risk of recurrence. In these trials, postoperative radiotherapy has been shown to reduce the risk of isolated local recurrence but there is no evidence that it improves recurrence-free or overall survival. We report the findings from the ASTEC and EN.5 trials, which investigated adjuvant external beam radiotherapy in women with early-stage disease and pathological features suggestive of intermediate or high risk of recurrence and death from endometrial cancer. Between July, 1996, and March, 2005, 905 (789 ASTEC, 116 EN.5) women with intermediate-risk or high-risk early-stage disease from 112 centres in seven countries (UK, Canada, Poland, Norway, New Zealand, Australia, USA) were randomly assigned after surgery to observation (453) or to external beam radiotherapy (452). A target dose of 40-46 Gy in 20-25 daily fractions to the pelvis, treating five times a week, was specified. Primary outcome measure was overall survival, and all analyses were by intention to treat. These trials were registered ISRCTN 16571884 (ASTEC) and NCT 00002807 (EN.5). After a median follow-up of 58 months, 135 women (68 observation, 67 external beam radiotherapy) had died. There was no evidence that overall survival with external beam radiotherapy was better than observation, hazard ratio 1.05 (95% CI 0.75-1.48; p=0.77). 5-year overall survival was 84% in both groups. Combining data from ASTEC and EN.5 in a meta-analysis of trials confirmed that there was no benefit in terms of overall survival (hazard ratio 1.04; 95% CI 0.84-1.29) and can reliably exclude an absolute benefit of external beam radiotherapy at 5 years of more than 3%. 
With brachytherapy used in 53% of women in ASTEC/EN.5, the local recurrence rate in the observation group at 5 years

  11. Modeling of fission product release in integral codes

    International Nuclear Information System (INIS)

    Obaidurrahman, K.; Raman, Rupak K.; Gaikwad, Avinash J.

    2014-01-01

    The Great Tohoku earthquake and tsunami that struck the Fukushima-Daiichi nuclear power station on March 11, 2011 has intensified the need for detailed nuclear safety research, and with this objective all streams associated with severe accident phenomenology are being revisited thoroughly. The present paper covers an overview of the state-of-the-art FP release models in use, the important phenomena considered in semi-mechanistic models, and knowledge gaps in present FP release modeling. The capability of the FP release module ELSA of the ASTEC integral code to appropriately predict FP release under several diversified core degradation conditions will also be demonstrated. The use of semi-mechanistic fission product release models at AERB for source-term estimation is briefly described. (author)

  12. An Adaptive Motion Estimation Scheme for Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast motion estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to statistics of the motion vector (MV) distribution. Then, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared with the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.
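The notion of adaptive subregional searching, trying a small pattern around a predicted MV first and widening only when the match is poor, can be sketched in 1-D. This is a toy illustration; the real UMHexagonS patterns are 2-D cross and hexagon grids, and the threshold here is an assumption:

```python
# Toy 1-D motion search with an adaptively restricted search region.
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def motion_search(ref, cur, pos, size, pred_mv, small=1, large=4, thr=5):
    block = cur[pos:pos + size]
    def best_in(radius):
        cands = []
        for d in range(-radius, radius + 1):
            s = pos + pred_mv + d
            if 0 <= s <= len(ref) - size:
                cands.append((sad(ref[s:s + size], block), pred_mv + d))
        return min(cands)
    cost, mv = best_in(small)      # adaptive: try the small pattern first
    if cost > thr:
        cost, mv = best_in(large)  # escalate only when the match is poor
    return mv, cost

ref = [0, 0, 7, 9, 8, 0, 0, 0, 0, 0]
cur = [0, 0, 0, 0, 0, 7, 9, 8, 0, 0]   # same block shifted right by 3
mv, cost = motion_search(ref, cur, pos=5, size=3, pred_mv=0)
print(mv, cost)  # -> -3 0
```

The saving comes from the first pass: when the MV predictor is good, the small window already finds a match below the threshold and the wide search is skipped entirely.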

  13. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  14. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Kuznetsov, Mikhail; Kostka, Pal; Kubišova, Lubica; Maltsev, Mikhail; Manzini, Giovanni; Povilaitis, Mantas

    2015-01-01

    Highlights: • Blind and open simulations of a hydrogen combustion experiment in a large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of the adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider spread. Concerning the axial and radial flame velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  15. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
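The core idea, parity coding over a "generation" of packets with the generation size adapted to the observed link error rate, can be sketched as follows. The XOR-parity scheme and the adaptation rule here are illustrative assumptions, not the authors' algorithm.

```python
# Sketch of adaptive parity-based error recovery: one XOR parity packet
# protects a generation of k data packets (so one loss per generation is
# recoverable), and k shrinks as the observed loss rate grows. The function
# names and the adaptation rule are our assumptions, not the paper's design.

from functools import reduce

def xor_parity(packets):
    """XOR equal-length byte packets into a single parity packet."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

def recover(received, parity):
    """Recover the single missing packet of a generation (None marks the loss)."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        raise ValueError("a single XOR parity recovers exactly one loss")
    present = [p for p in received if p is not None] + [parity]
    out = list(received)
    out[missing[0]] = xor_parity(present)  # XOR of survivors + parity = lost packet
    return out

def generation_size(loss_rate, k_max=8):
    """Adapt k so the expected number of losses per generation stays below one."""
    if loss_rate <= 0:
        return k_max
    return max(1, min(k_max, int(0.5 / loss_rate)))
```

A QoS-aware variant would additionally weigh per-application delay and reliability targets when choosing the generation size, as the paper describes.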

  16. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

    It is known that three-dimensional (3-D) discrete ordinates (Sn) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multi-group, anisotropic Sn problems on distributed-memory platforms, such as the IBM-SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel Sn applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the amount of storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing.
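The flavor of a zeroth-moment adaptive strategy can be illustrated with a one-dimensional analogue: attempt diamond differencing in each cell and fall back to the positive but less accurate step (upwind) scheme when diamond would produce a negative flux. This classical fixup is only a sketch of the general idea, not PENTRAN's actual EDA scheme.

```python
# 1-D single-ordinate transport sweep (mu > 0) with per-cell adaptive
# differencing: diamond difference (2nd order) where it stays positive,
# step/upwind (1st order, always positive) otherwise. A simplified analogue
# of adaptive differencing, not the PENTRAN algorithm.

def sweep_adaptive(psi_in, mu, h, sigma, q, n_cells):
    """Return (outgoing edge fluxes, scheme used) per cell.

    psi_in -- incoming angular flux at the left boundary
    mu     -- direction cosine (> 0), h -- cell width
    sigma  -- total cross section, q -- flat source per cell
    """
    fluxes, schemes = [], []
    a = mu / h
    for _ in range(n_cells):
        # Diamond: cell average = (psi_in + psi_out) / 2 in the balance equation
        psi_out = (q + psi_in * (a - sigma / 2.0)) / (a + sigma / 2.0)
        if psi_out < 0.0:
            # Step: cell average = psi_out; diffusive but strictly positive
            psi_out = (q + a * psi_in) / (a + sigma)
            schemes.append("step")
        else:
            schemes.append("diamond")
        fluxes.append(psi_out)
        psi_in = psi_out
    return fluxes, schemes
```

Optically thin cells keep the accurate diamond scheme; optically thick cells (sigma·h large relative to mu) trigger the positive fallback, which is exactly the per-zone adaptivity the abstract motivates.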

  17. Containment response to a severe accident (TMLB sequence) with and without mitigation strategies

    International Nuclear Information System (INIS)

    Passalacqua, R.

    2004-01-01

    A loss of SG feed-water transient (TMLB sequence) for a prototypic PWR 900 MWe with a multi-compartment containment configuration (11- and 16-cell nodalizations) has been calculated by the author using the ASTEC code in the frame of the EVITA project (5th Framework Programme, FWP). A variety of hypotheses (e.g. activation of sprays and hydrogen recombiners) and possible consequences of these assumptions (cavity flooding, hydrogen combustion, etc.) have been considered in order to evaluate the global reactor containment building response (pressure, aerosol/FP concentration, etc.). The need for severe accident management guidelines (SAMGs) is increasing. These guidelines are meant to allow nuclear plant operators to apply mitigation strategies throughout a severe accident, whose initial phase alone may last several days. The purpose of this paper is to outline the influence of the most common accident occurrences and operator actions on the containment load, which is essential in establishing SAMGs. ASTEC (Accident Source Term Evaluation Code) is a computer code for evaluating the consequences of a postulated severe accident sequence in a nuclear plant. ASTEC is currently under joint development by the Institut de Radioprotection et de Surete Nucleaire (IRSN), France, and Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), Germany. The aim of the development is to create a fast-running integral code package, reliable in all simulations of a severe accident, to be used for level-2 PSA analyses. Several recent developments have significantly improved the best-estimate models of ASTEC, and a new version (ASTEC V1.0) was released in mid-2002. Nevertheless, the somewhat older ASTEC v0.3 version used here has given results very useful for estimating the global risk of a nuclear plant.
Moreover, under the current 6th FWP (Sustainable Integration of EU Research on Severe Accident Phenomenology and Management), the

  18. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it is developed more fully than the algorithms currently used in the literature and achieves more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
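The threshold-driven selection principle, splitting a block whenever its maximum coding distortion would exceed a threshold, can be sketched with a toy quadtree coder. The max-deviation criterion and the mean-value "coder" below are deliberate simplifications of the dissertation's transform coders.

```python
# Toy threshold-driven variable-blocksize coder: a block is represented by its
# mean value if the maximum deviation from that mean is within a distortion
# threshold t; otherwise it is split into four sub-blocks (quadtree). Difficult
# regions therefore end up in small blocks, smooth regions in large ones.
# This illustrates the selection principle only, not the actual transform coders.

def code_block(img, x, y, size, t):
    """Return a quadtree node: ('leaf', mean) or ('split', [four children])."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(vals) / len(vals)
    if size == 1 or max(abs(v - mean) for v in vals) <= t:
        return ("leaf", mean)
    half = size // 2
    return ("split", [code_block(img, x + dx, y + dy, half, t)
                      for dy in (0, half) for dx in (0, half)])

def count_leaves(node):
    """Number of coded blocks: a proxy for the bit cost of the decomposition."""
    if node[0] == "leaf":
        return 1
    return sum(count_leaves(c) for c in node[1])
```

A flat image is coded as a single leaf, while a single busy corner forces splits only along the path that contains it, leaving the rest of the image in large blocks.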

  19. Quadrature amplitude modulation from basics to adaptive trellis-coded turbo-equalised and space-time coded OFDM CDMA and MC-CDMA systems

    CERN Document Server

    Hanzo, Lajos

    2004-01-01

    "Now fully revised and updated, with more than 300 pages of new material, this new edition presents the wide range of recent developments in the field and places particular emphasis on the family of coded modulation aided OFDM and CDMA schemes. In addition, it also includes a fully revised chapter on adaptive modulation and a new chapter characterizing the design trade-offs of adaptive modulation and space-time coding." "In summary, this volume amalgamates a comprehensive textbook with a deep research monograph on the topic of QAM, ensuring it has a wide-ranging appeal for both senior undergraduate and postgraduate students as well as practicing engineers and researchers."--Jacket.

  20. Overall simulation of a HTGR plant with the gas adapted MANTA code

    International Nuclear Information System (INIS)

    Emmanuel Jouet; Dominique Petit; Robert Martin

    2005-01-01

    Full text of publication follows: AREVA's subsidiary Framatome ANP is developing a Very High Temperature Reactor nuclear heat source that can be used for electricity generation as well as cogeneration, including hydrogen production. The selected product has an indirect-cycle architecture which is easily adapted to all possible uses of the nuclear heat source. The coupling to the applications is implemented through an Intermediate Heat Exchanger. The system code chosen to calculate the steady-state and transient behaviour of the plant is based on the MANTA code. The flexible and modular MANTA code, originally a system code for all non-LOCA PWR plant transients, has been the subject of new developments to simulate all forced-convection transients of a nuclear plant with a gas-cooled high temperature reactor, including specific core thermal-hydraulics and neutronics models, gas and water/steam turbomachinery, and the control structure. The gas-adapted MANTA version is now able to model a complete HTGR plant with a direct Brayton cycle as well as indirect cycles. To validate these new developments, a MANTA model of a real plant with a direct Brayton cycle has been built, and steady states and transients have been compared with recorded thermal-hydraulic measurements. Finally, a comparison with the RELAP5 code has been performed for transient calculations of the AREVA indirect-cycle HTR project plant. Moreover, to improve user-friendliness so that MANTA can be used as a system conception and design optimization tool as well as a plant simulation tool, a Man-Machine Interface is available. Acronyms: MANTA Modular Advanced Neutronic and Thermal hydraulic Analysis; HTGR High Temperature Gas-Cooled Reactor. (authors)

  1. Assessment of W7-X plasma vessel pressurisation in case of LOCA taking into account in-vessel components

    Energy Technology Data Exchange (ETDEWEB)

    Urbonavičius, E., E-mail: Egidijus.Urbonavicius@lei.lt; Povilaitis, M., E-mail: Mantas.Povilaitis@lei.lt; Kontautas, A., E-mail: Aurimas.Kontautas@lei.lt

    2015-11-15

    Highlights: • Analysis of the vacuum vessel response to a LOCA in W7-X was performed using the lumped-parameter codes COCOSYS and ASTEC. • Benchmarking the results obtained with the two codes provides more confidence in the results and helps identify possible important differences in the modelling. • The performed analysis showed that the installed plasma vessel venting system prevents overpressure of the PV in case of a 40 mm diameter LOCA in “baking” mode. • Differences in the time until opening of the burst disk observed in the ASTEC and COCOSYS results are caused by differences in heat transfer modelling. - Abstract: This paper presents the analysis of the W7-X vacuum vessel response taking into account in-vessel components. A detailed analysis of the vacuum vessel response to a loss-of-coolant accident was performed using the lumped-parameter codes COCOSYS and ASTEC. The performed analysis showed that the installed plasma vessel venting system prevents overpressure of the PV in case of a 40 mm diameter LOCA in “baking” mode. The performed analysis also revealed differences in the heat transfer modelling implemented in the ASTEC and COCOSYS computer codes, which require further investigation to justify the correct approach for application to fusion facilities.

  2. Assessment of W7-X plasma vessel pressurisation in case of LOCA taking into account in-vessel components

    International Nuclear Information System (INIS)

    Urbonavičius, E.; Povilaitis, M.; Kontautas, A.

    2015-01-01

    Highlights: • Analysis of the vacuum vessel response to a LOCA in W7-X was performed using the lumped-parameter codes COCOSYS and ASTEC. • Benchmarking the results obtained with the two codes provides more confidence in the results and helps identify possible important differences in the modelling. • The performed analysis showed that the installed plasma vessel venting system prevents overpressure of the PV in case of a 40 mm diameter LOCA in “baking” mode. • Differences in the time until opening of the burst disk observed in the ASTEC and COCOSYS results are caused by differences in heat transfer modelling. - Abstract: This paper presents the analysis of the W7-X vacuum vessel response taking into account in-vessel components. A detailed analysis of the vacuum vessel response to a loss-of-coolant accident was performed using the lumped-parameter codes COCOSYS and ASTEC. The performed analysis showed that the installed plasma vessel venting system prevents overpressure of the PV in case of a 40 mm diameter LOCA in “baking” mode. The performed analysis also revealed differences in the heat transfer modelling implemented in the ASTEC and COCOSYS computer codes, which require further investigation to justify the correct approach for application to fusion facilities.

  3. Containment analysis on the PHEBUS FPT-0, FPT-1 and FPT-2 experiments

    International Nuclear Information System (INIS)

    Gyenes, Gyorgy; Ammirabile, Luca

    2011-01-01

    Research highlights: → The CPA/ASTEC code can reproduce patterns similar to those of CFD-based codes. → The deposition on the elliptic bottom and on the painted wet condenser is qualitatively predicted. → The gas circulation drives the quick mixing of aerosols in the containment atmosphere. → The flow fields in CPA/ASTEC have a medium impact on the airborne mass in the PHEBUS containment. - Abstract: In a severe accident, most of the fission-product species are already condensed into aerosols when they are released to the containment. The behaviour of these aerosol particles controls the fission-product transport into the containment and affects the global Source Term. The calculations presented here were performed using the CPA module (Containment Package implemented in the European integral code ASTEC) for the in-pile PHEBUS FPT-0, FPT-1 and FPT-2 experiments and are focused on the aerosol transport. A detailed thermal-hydraulic model was used in the CPA/ASTEC code to evaluate the gas circulation pattern in the closed containment volume. The comparison of ASTEC results showed that the patterns are similar to the ones predicted by CFD-based codes. Good agreement was reached with the measured average thermal-hydraulic parameters such as containment gas pressure, temperature and the condensation rate on the condensers. The calculations with the detailed simulation of the flow in the PHEBUS containment qualitatively predicted the particle settling on the elliptic bottom and deposition on the painted wet condenser surfaces. It was shown that the influence of the gas circulation leads to a relatively quick mixing of aerosols in the containment atmosphere. In the tests investigated, the effect of the gas circulation on the airborne aerosol mass during the aerosol injection period is small because the injected mass flux is significantly higher than the deposition fluxes on the vessel surfaces. During the long-term aerosol deposition phase, the flow fields predicted

  4. NATO-ASTEC-matrix-research environment, information sharing and MCA

    International Nuclear Information System (INIS)

    Apikyan, S.; Yerznkanyan, K.; Diamond, D.; Vardanyan, M.; Sevikyan, G.

    2010-01-01

    The successful implementation of the NATO-ASTEC-MATRIX project in Armenia is an essential contribution to security, stability and solidarity among regional nations, applying the best technical expertise to problem solving. Collaboration, networking and capacity-building are the means used to accomplish these goals. A further aim is to promote cooperation with new partners; ASTEC creates links between scientists and organizations in formerly separated communities, develops a new strategy concentrating support on security-related collaborative projects, and seeks answers to critical questions as a way of connecting nations. The NATO-ASTEC-MATRIX project within Armenia leads to a network of high-standard laboratories that will drastically improve the overview and the technical infrastructure for monitoring, accounting and control of CBRN materials in Armenia. This new infrastructure will enhance the exchange of information on this vital issue via the IRIS. In follow-up phases, it will also help to better define the needs and requirements for a policy to enhance legal tools for the management of these materials, and for the creation of one or several agencies dealing with wastes or no-longer-useful materials containing CBRN components in Armenia.

  5. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Directory of Open Access Journals (Sweden)

    Mohammad Abdur Razzaque

    2014-12-01

    Full Text Available Wireless body sensor networks (WBSNs for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS, in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network’s QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  6. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  7. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  8. Coolability in the frame of core melt accidents in light water reactors. Model development and validation for ATHLET-CD and ASTEC. Final report; Kuehlbarkeit im Rahmen von Kernschmelzunfaellen bei Leichtwasserreaktoren. Modellentwicklung und Validierung fuer ATHLET-CD und ASTEC. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Buck, Michael; Pohlner, Georg; Rahman, Saidur; Berkhan, Ana

    2015-07-15

    The code system ATHLET/ATHLET-CD is being developed in the frame of the reactor safety research of the German Federal Ministry for Economic Affairs and Energy (BMWi) within the topic "analysis of transients and accident sequences". It serves for the simulation of transients and accidents in safety analyses for light water reactors. In the present project, the development and validation of ATHLET-CD models for the description of severe accident processes are continued. This work should enable comprehensive safety analyses through a mechanistic description of the processes even during the late phases of core degradation, and thereby a well-founded assessment of coolability and accident management options in every phase. With the current status of modelling in ATHLET-CD, coolability analyses are performed to provide a solid basis for estimating, depending on the scenario, whether the accident can be stabilized by cooling or will progress. The modelling in the MEWA module, which describes the processes in a severely degraded core in ATHLET-CD, is extended to the processes in the lower plenum. For this, the melt-pool behavior model is extended and coupled to the RPV wall. The coupling between MEWA and the thermal-hydraulics of ATHLET-CD is improved. The validation of the models continues with calculations of new experiments and comparative analyses performed in the frame of the European network SARNET-2. Contributions to the European integral code ASTEC will be made from the ATHLET-CD modelling, in particular a model for the melt behavior in the lower plenum of an LWR. This report describes the work carried out in the frame of this project and presents calculation results and the validation status, based on recalculations of experiments on debris-bed coolability, melt-pool behavior, and jet fragmentation with debris-bed formation.

  9. Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage

    Directory of Open Access Journals (Sweden)

    Guangjun Ge

    2018-01-01

    Full Text Available Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impacts of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is effective enough for finite-precision LDPC decoders. Secondly, we analyze the effects of quantization precision loss for static sign-bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a construction of a Hamming product code for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in both decoding threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10⁻³ to 10⁻⁴.
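The underlying idea of spending LSB precision to protect sign bits can be sketched with a simple repetition variant; the paper itself uses an embedded Hamming product code, and the bit width and two-copy redundancy below are illustrative assumptions.

```python
# Sketch of sign-bit protection for a quantized decoder message stored in
# faulty memory: the sign bit (MSB) is replicated into the 2 LSBs at write
# time, and majority-voted at read time. The magnitude sacrifices its 2 LSBs
# of precision. Repetition stands in for the paper's Hamming product code.

WIDTH = 6  # assumed message word width in bits

def encode(msg):
    """Overwrite the 2 LSBs of a WIDTH-bit word with copies of its sign bit."""
    sign = (msg >> (WIDTH - 1)) & 1
    return (msg & ~0b11) | (sign << 1) | sign

def decode(word):
    """Majority-vote the sign from MSB + 2 LSB copies; clear the LSB field."""
    bits = [(word >> (WIDTH - 1)) & 1, (word >> 1) & 1, word & 1]
    sign = 1 if sum(bits) >= 2 else 0
    word = (word & ((1 << WIDTH) - 1) & ~(1 << (WIDTH - 1))) | (sign << (WIDTH - 1))
    return word & ~0b11  # magnitude keeps WIDTH - 3 useful bits

def flip(word, pos):
    """Model a single storage error: flip one bit."""
    return word ^ (1 << pos)
```

Any single bit error among the three sign copies is corrected, at the cost of two magnitude bits per word, which is the precision-loss trade-off the paper analyzes.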

  10. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  11. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
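The block classification and the two prediction modes can be sketched as follows; the SAD-based classifier and the thresholds are our assumptions, standing in for the paper's actual mode decision.

```python
# Sketch of BMAP-style per-block mode selection: blocks matching the modeled
# background use background reference prediction (BRP), mixed blocks are coded
# as a residual against the background (BDP), pure foreground falls back to
# ordinary inter coding. Thresholds and the SAD classifier are assumptions.

def sad(a, b):
    """Sum of absolute differences between two pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def choose_mode(block, bg_block, t_bg=8, t_fg=64):
    d = sad(block, bg_block)
    if d <= t_bg:
        return "BRP"    # background block: reference the background model
    if d >= t_fg:
        return "INTER"  # pure foreground: normal motion-compensated path
    return "BDP"        # hybrid: code the block-minus-background residual

def predict(block, bg_block, mode):
    """Return (prediction, residual to encode) for the chosen mode."""
    if mode == "BRP":
        return list(bg_block), [0] * len(block)
    if mode == "BDP":
        return list(bg_block), [x - y for x, y in zip(block, bg_block)]
    return None, list(block)  # INTER: handled by the ordinary coding path
```

In the BDP case the static pixels of a hybrid block cancel against the background, leaving a sparse residual, which is the source of the coding gain described above.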

  12. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Science.gov (United States)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  13. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Energy Technology Data Exchange (ETDEWEB)

    Anninos, Peter; Lau, Cheuk [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94550 (United States); Bryant, Colton [Department of Engineering Sciences and Applied Mathematics, Northwestern University, 2145 Sheridan Road, Evanston, Illinois, 60208 (United States); Fragile, P. Chris [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Holgado, A. Miguel [Department of Astronomy and National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801 (United States); Nemergut, Daniel [Operations and Engineering Division, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  14. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    International Nuclear Information System (INIS)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Nemergut, Daniel

    2017-01-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  15. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-12-23

    © 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC. In addition, we propose adaptive MLSTBC schemes that are capable of accommodating the channel signal-to-noise ratio variation of wireless systems by near-instantaneously adapting the uplink transmission configuration. The main results demonstrate that significant effective throughput improvements can be achieved while maintaining a certain target bit error rate.
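
    Each layer of an MLSTBC transmits a conventional space-time block code. As an illustrative building block (the paper's exact per-layer code is not given here), the classic two-antenna Alamouti code and its linear combining can be sketched as:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti block: rows are the two time slots, columns the two
    transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at the receiver, assuming the flat channel
    (h1, h2) stays constant over both slots; the (|h1|^2 + |h2|^2)
    scaling is divided out."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat
```

    The orthogonal structure is what yields full transmit diversity at rate one; stacking several such layers restores the multiplexing gain at the cost of inter-layer interference, which the adaptive schemes trade off against the channel SNR.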

  16. The Development of Severe Accident Codes at IRSN and Their Application to Support the Safety Assessment of EPR

    International Nuclear Information System (INIS)

    Caroli, Cataldo; Bleyer, Alexandre; Bentaib, Ahmed; Chatelard, Patrick; Cranga, Michel; Van Dorsselaere, Jean-Pierre

    2006-01-01

    IRSN uses a two-tier approach for the development of codes analysing the course of a hypothetical severe accident (SA) in a Pressurized Water Reactor (PWR): on the one hand, the integral code ASTEC, jointly developed by IRSN and GRS, for fast-running and complete analysis of a sequence; on the other hand, detailed codes for best-estimate analysis of specific phenomena, such as ICARE/CATHARE, MC3D (for steam explosion), CROCO and TONUS. They have been extensively used to support the level 2 Probabilistic Safety Assessment of the 900 MWe PWR and, in general, the safety analysis of the French PWRs. In particular, the codes ICARE/CATHARE, CROCO, MEDICIS (a module of ASTEC) and TONUS are used to support the safety assessment of the European Pressurized Reactor (EPR). The ICARE/CATHARE code system has been developed for the detailed evaluation of SA consequences in a PWR primary system. It couples the IRSN core degradation code ICARE2 with the French thermal-hydraulics code CATHARE2. The CFD code CROCO describes the corium flow in the spreading compartment. Heat transfer to the surrounding atmosphere and to the basemat, leading to the possible formation of upper and lower crusts, basemat ablation and gas sparging through the flow are modelled. CROCO has been validated against a wide experimental basis, including the CORINE, KATS and VULCANO programs. MEDICIS simulates MCCI (Molten-Corium-Concrete-Interaction) using a lumped-parameter approach. Its models are being continuously improved through the interpretation of most MCCI experiments (OECD-CCI, ACE...). The TONUS code has been developed by IRSN in collaboration with CEA for the analysis of the hydrogen risk (both distribution and combustion) in the reactor containment. The analyses carried out to support the EPR safety assessment are based on a CFD formulation. For this purpose, a low-Mach-number multi-component Navier-Stokes solver is used to analyse the hydrogen distribution. Presence of air, steam and

  17. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Science.gov (United States)

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

    To achieve both a high compression ratio and information preservation, it is efficient to combine segmentation with a lossy compression scheme. Microcalcification in mammograms is one of the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
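
    A peak-error-constrained uniform quantizer is easy to sketch: a midtread quantizer with step 2*eps bounds every coefficient's reconstruction error by eps. The region-adaptive part would then be choosing a smaller eps for blocks flagged as containing microcalcifications. Names and values are illustrative.

```python
import numpy as np

def quantize_block(coeffs, peak_err):
    """Uniform midtread quantizer with step 2*peak_err, so that the
    reconstruction error of every coefficient is at most peak_err."""
    step = 2.0 * peak_err
    idx = np.round(coeffs / step)
    return idx.astype(int), idx * step
```

    A microcalcification block might use, say, peak_err=0.25 while ordinary background blocks use peak_err=2.0, concentrating the bit budget where diagnostic detail matters.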

  18. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  19. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
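
    The syndrome-based setup can be illustrated with a toy stand-in: a Hamming(7,4) syndrome in place of the LDPC syndromes and sum-product decoding used in the paper, with side information that differs from the source in at most one bit.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j holds the
# binary digits of j+1, so the syndrome of a 1-bit error names its position.
H = np.array([[(j >> k) & 1 for j in range(1, 8)] for k in range(3)])

def syndrome(x):
    return H.dot(x) % 2

def sw_decode(y, s):
    """Slepian-Wolf style decoding: recover x from its syndrome s and
    correlated side information y, assuming they differ in <= 1 bit."""
    d = (syndrome(y) + s) % 2           # syndrome of the difference pattern
    pos = d[0] + 2 * d[1] + 4 * d[2]    # Hamming syndrome = error position
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat
```

    The encoder transmits only the 3-bit syndrome instead of the 7-bit source; the decoder's side information supplies the rest, which is the compression mechanism the paper scales up with LDPC codes, doping bits, and hidden-variable estimation.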

  20. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI, the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is carried out with respect to noise tolerance.
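
    A simplified single-shot CASSI forward model shows where the (grayscale) coded aperture enters the measurement. The one-pixel-per-band shear and the array shapes below are illustrative simplifications of the real optics.

```python
import numpy as np

def cassi_measure(cube, code):
    """Simplified CASSI forward model: each spectral band of the data
    cube is modulated by the coded aperture, sheared one pixel per band
    by the dispersive element, and integrated on the FPA."""
    h, w, bands = cube.shape
    y = np.zeros((h, w + bands - 1))
    for b in range(bands):
        y[:, b:b + w] += code * cube[:, :, b]
    return y
```

    With a binary (block-unblock) code the entries of `code` are 0 or 1; a grayscale code uses intermediate transmittances, which attenuates bright scene regions and so reduces FPA saturation, as the abstract describes.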

  1. Simulation of the in-pile test Phebus-FPT3 using ASTEC V2 and ATHLET-CD 2.1A

    Energy Technology Data Exchange (ETDEWEB)

    Kruse, Philipp; Koch, Marco K. [Bochum Univ. (Germany). Chair of Energy Systems and Energy Economics

    2011-07-01

    The Phebus-FPT programme, initiated in 1988 by the 'Institut de Radioprotection et de Sûreté Nucléaire' (IRSN) and the Joint Research Centre (JRC) of the European Commission (EC), was performed in the Phebus facility operated by the 'Commissariat à l'Énergie Atomique' (CEA). The facility represents a 900 MWe Pressurized Water Reactor (PWR) scaled down by a factor of 1:5000; its objective is to study fuel degradation and the subsequent release, transport and retention of fission products, structure, control rod and fuel materials in case of a severe accident. The Phebus-FPT programme consists of integral in-pile tests, varying the fuel burn-up and geometry, the control rod nature, and the thermal-hydraulic conditions in the bundle, through the experimental circuit as well as in the containment. Primarily, the integral experiments should provide a detailed description of the main phenomena of core degradation, fission product release and transport, as well as radionuclide interactions. This makes it possible to analyse the physical and chemical processes occurring during a severe accident. With the ascertained data, an evaluation of accident management measures can be made as well. A secondary aim of the Phebus tests was to enable model development and evaluation of severe accident codes such as ASTEC and ATHLET-CD. (orig.)

  2. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    International Nuclear Information System (INIS)

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  3. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
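
    The divisive-normalization account of this adaptation can be written in one line: the current valuation is divided by a semisaturation constant plus the average of recently observed values. The additive constant and the plain running average below are illustrative modeling choices, not the paper's fitted form.

```python
def normalized_value(v, recent_values, sigma=1.0):
    """Divisive normalization: subjective value scales inversely with
    the average of recently observed values (sigma avoids division by
    zero and sets the adaptation strength)."""
    baseline = sum(recent_values) / len(recent_values) if recent_values else 0.0
    return v / (sigma + baseline)
```

    The same item is thus valued lower after a run of high-value items than after a run of low-value items, which is the inverse history dependence the study reports.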

  4. A study on climatic adaptation of dipteran mitochondrial protein coding genes

    Directory of Open Access Journals (Sweden)

    Debajyoti Kabiraj

    2017-10-01

    Full Text Available Diptera, the true flies, are frequently found in nature and inhabit all parts of the world, including Antarctica and the polar regions. The number of documented species in the order Diptera is quite high, thought to be 14% of all animals present on Earth [1]. Most studies in Diptera have focused on taxa of economic and medical importance, such as the fruit flies Ceratitis capitata and Bactrocera spp. (Tephritidae), which are serious agricultural pests; the blowflies (Calliphoridae) and oestrid flies (Oestridae), which can cause myiasis; the anopheles mosquitoes (Culicidae), which are the vectors of malaria; and the leaf-miners (Agromyzidae), vegetable and horticultural pests [2]. The insect mitochondrion, the remnant of an alpha-proteobacterium, consists of 13 protein-coding genes, 22 tRNAs and 2 rRNAs and is responsible for the simultaneous functions of energy production and thermoregulation of the cell through a bi-genomic system; thus, adaptability to different climatic conditions might have been compensated by complementary changes in both genomes [3,4]. In this study, we have collected the complete mitochondrial genomes and occurrence data of one hundred thirteen such dipteran insects from different databases and a literature survey. Our understanding of the genetic basis of climatic adaptation in Diptera is limited to basic information on the occurrence locations of those species and the mitogenetic factors underlying changes in conspicuous phenotypes. To examine this hypothesis, we have taken a nucleotide substitution analysis approach for the 13 protein-coding genes of mitochondrial DNA, individually and combined, using different software for monophyletic as well as paraphyletic groups of dipteran species. Moreover, we have also calculated the codon adaptation index for all dipteran mitochondrial protein-coding genes. Following this work, we have classified our sample organisms according to their location data from GBIF (https
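
    The codon adaptation index mentioned above is the geometric mean of each codon's relative adaptiveness weight (1.0 for the preferred synonymous codon). A minimal sketch; the weights in the test are made-up illustrations, not measured values.

```python
from math import exp, log

def cai(codons, weights):
    """Codon Adaptation Index: geometric mean of the relative
    adaptiveness weight w of each codon in the gene."""
    vals = [weights[c] for c in codons]
    return exp(sum(log(w) for w in vals) / len(vals))
```

    A gene built entirely from preferred codons scores 1.0; heavy use of rare synonymous codons pulls the index toward 0.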

  5. Intrinsic gain modulation and adaptive neural coding.

    Directory of Open Access Journals (Sweden)

    Sungho Hong

    2008-07-01

    Full Text Available In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships relating the change of the gain with respect to both mean and variance with the receptive fields derived from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
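
    The "empirical linear/nonlinear model obtained by sampling" refers to reverse correlation: with a white-noise stimulus, the spike-triggered average estimates the linear filter of the LN model. A minimal sketch; the window length and variable names are illustrative.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, lag=20):
    """Reverse correlation: average the stimulus windows preceding each
    spike; for white-noise input this estimates the receptive field
    (linear filter) of a linear/nonlinear model."""
    windows = [stimulus[t - lag:t] for t in spike_times if t >= lag]
    return np.mean(windows, axis=0)
```

    The gain curve is then obtained by comparing the filtered stimulus with the observed firing probability, which is where the variance-dependent slope changes discussed above appear.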

  6. Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding

    Directory of Open Access Journals (Sweden)

    Mailhes Corinne

    2007-01-01

    Full Text Available Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original algorithm using zerotrees, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW to the compression of hyperspectral images. The subordinate pass is suppressed to remove the need to keep the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.
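
    A standard signed binary digit representation is the non-adjacent form (NAF), which uses digits in {-1, 0, 1} and minimizes the number of nonzero digits. Whether the paper uses NAF exactly or another SBD variant is not stated here, so this is only an illustrative instance.

```python
def naf(n):
    """Non-adjacent form of a positive integer: signed binary digits in
    {-1, 0, 1}, least significant first, with no two consecutive
    nonzero digits."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits
```

    Fewer nonzero digits means fewer significance symbols to code per coefficient magnitude, which is how an SBD representation can recover efficiency lost by dropping the subordinate pass.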

  7. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    Science.gov (United States)

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Efficacy of systematic pelvic lymphadenectomy in endometrial cancer (MRC ASTEC trial): a randomised study.

    Science.gov (United States)

    Kitchener, H; Swart, A M C; Qian, Q; Amos, C; Parmar, M K B

    2009-01-10

    Hysterectomy and bilateral salpingo-oophorectomy (BSO) is the standard surgery for stage I endometrial cancer. Systematic pelvic lymphadenectomy has been used to establish whether there is extra-uterine disease and as a therapeutic procedure; however, randomised trials need to be done to assess therapeutic efficacy. The ASTEC surgical trial investigated whether pelvic lymphadenectomy could improve survival of women with endometrial cancer. From 85 centres in four countries, 1408 women with histologically proven endometrial carcinoma thought preoperatively to be confined to the corpus were randomly allocated by a minimisation method to standard surgery (hysterectomy and BSO, peritoneal washings, and palpation of para-aortic nodes; n=704) or standard surgery plus lymphadenectomy (n=704). The primary outcome measure was overall survival. To control for postsurgical treatment, women with early-stage disease at intermediate or high risk of recurrence were randomised (independent of lymph-node status) into the ASTEC radiotherapy trial. Analysis was by intention to treat. This study is registered, number ISRCTN 16571884. After a median follow-up of 37 months (IQR 24-58), 191 women (88 standard surgery group, 103 lymphadenectomy group) had died, with a hazard ratio (HR) of 1.16 (95% CI 0.87-1.54; p=0.31) in favour of standard surgery and an absolute difference in 5-year overall survival of 1% (95% CI -4 to 6). 251 women died or had recurrent disease (107 standard surgery group, 144 lymphadenectomy group), with an HR of 1.35 (1.06-1.73; p=0.017) in favour of standard surgery and an absolute difference in 5-year recurrence-free survival of 6% (1-12). With adjustment for baseline characteristics and pathology details, the HR for overall survival was 1.04 (0.74-1.45; p=0.83) and for recurrence-free survival was 1.25 (0.93-1.66; p=0.14). Our results show no evidence of benefit in terms of overall or recurrence-free survival for pelvic lymphadenectomy in women with early

  9. Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.

    Science.gov (United States)

    Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir

    2017-08-01

    Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications for all biomedical disciplines. Here, we aimed to understand how evolution shapes the bacteriophage lambda genes by performing a high-resolution analysis of ribosomal profiling data and gene-expression-related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that the evolution of viral coding regions is driven, among others, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency could be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that it can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant also to additional viruses. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  10. Adaptive Iterative Soft-Input Soft-Output Parallel Decision-Feedback Detectors for Asynchronous Coded DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Zhang Wei

    2005-01-01

    Full Text Available The optimum and many suboptimum iterative soft-input soft-output (SISO) multiuser detectors require a priori information about the multiuser system, such as the users' transmitted signature waveforms, relative delays, as well as the channel impulse response. In this paper, we employ adaptive algorithms in the SISO multiuser detector in order to avoid the need for this a priori information. First, we derive the optimum SISO parallel decision-feedback detector for asynchronous coded DS-CDMA systems. Then, we propose two adaptive versions of this SISO detector, which are based on the normalized least mean square (NLMS) and recursive least squares (RLS) algorithms. Our SISO adaptive detectors effectively exploit the a priori information of coded symbols, whose soft inputs are obtained from a bank of single-user decoders. Furthermore, we consider how to select practical finite feedforward and feedback filter lengths to obtain a good tradeoff between the performance and computational complexity of the receiver.
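
    The NLMS adaptation at the heart of one of the proposed detectors uses the standard normalized update: the step size is divided by the instantaneous input power, making convergence insensitive to signal scaling. The filter length, step size, and toy identification task below are illustrative.

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-6):
    """One normalized LMS update: error-driven filter adaptation with
    the step size normalized by the input power (eps avoids division
    by zero)."""
    e = d - w.dot(x)
    return w + mu * e * x / (eps + x.dot(x)), e
```

    In the detector, the desired signal d would be derived from the decoders' soft outputs rather than from a known reference, which is how the scheme avoids needing the signature waveforms and delays a priori.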

  11. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≃ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1=h(tn;δ) so that h(t;δ) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy that does not allow the step size to change if the step-size ratio between two consecutive steps is close to unity is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with
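
    The step-size policy under discussion is the classical controller: for a method of order p with local error estimate err, the next step is scaled by (δ/err)^(1/(p+1)) to steer the local error back to the tolerance. The safety factor and clipping bounds below are conventional illustrative choices.

```python
def next_step(h, err, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Classical step-size controller h_new = h * (tol/err)^(1/(p+1)),
    clipped to [fac_min, fac_max] * h. Keeping the local error near tol
    is what produces tolerance proportionality of the global error."""
    if err == 0.0:
        return h * fac_max
    factor = safety * (tol / err) ** (1.0 / (p + 1))
    return h * min(fac_max, max(fac_min, factor))
```

    The discontinuous policy studied in the paper additionally freezes h when the ratio of consecutive step sizes is close to one, which breaks the continuity of h(t;δ) that the classical theory assumes.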

  12. Validation of MCCI models implemented in ASTEC MEDICIS on OECD CCI-2 and CCI-3 experiments and further consideration on reactor cases

    Energy Technology Data Exchange (ETDEWEB)

    Agethen, K.; Koch, M.K., E-mail: agethen@lee.rub.de, E-mail: koch@lee.rub.de [Ruhr-Universitat Bochum, Energy Systems and Energy Economics, Reactor Simulation and Safety Group, Bochum (Germany)

    2014-07-01

    During a severe accident in a light water reactor, a loss of coolant can result in core melting and vessel failure. Afterwards, molten core material may discharge into the containment cavity and interact with the concrete basemat. Concrete erosion releases gases, which lead to exothermic oxidation reactions with the metals in the corium and to the formation of combustible mixtures. In this work, the MEDICIS module of the Accident Source Term Evaluation Code (ASTEC) is validated against experiments of the OECD CCI programme. The primary focus is set on the CCI-2 experiment with limestone common sand (LCS) concrete, in which nearly homogeneous erosion appeared, and the CCI-3 experiment with siliceous concrete, in which increased lateral erosion occurred. These experiments enable the analysis of heat transfer, depending on the axial and radial orientation, from the interior of the melt to the surrounding surfaces, and of the impact of top flooding. For the simulation of both tests, two existing models in MEDICIS are used and analysed. The simulation results show good agreement of the ablation behaviour, layer temperature and energy balance with the experimental results. Furthermore, a quasi-steady state appeared in the long-term energy balance. Finally, the basic data are scaled up to a generic reactor scenario, which shows that this quasi-steady state occurs similarly. (author)

  13. Supporting Dynamic Adaptive Streaming over HTTP in Wireless Meshed Networks using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani

    2014-01-01

    This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...

  14. The role of lymphadenectomy in endometrial cancer: was the ASTEC trial doomed by design and are we destined to repeat that mistake?

    Science.gov (United States)

    Naumann, R Wendel

    2012-07-01

    This study examines the design of previous and future trials of lymph node dissection in endometrial cancer. Data from previous trials were used to construct a decision analysis modeling the risk of lymphatic spread and the effects of treatment on patients with endometrial cancer. This model was then applied to previous trials as well as other future trial designs that might be used to address this subject. Comparing the predicted and actual results in the ASTEC trial, the model closely mimics the survival results with and without lymph node dissection for the low and high risk groups. The model suggests a survival difference of less than 2% between the experimental and control arms of the ASTEC trial under all circumstances. Sensitivity analyses reveal that these conclusions are robust. Future trial designs were also modeled with hysterectomy only, hysterectomy with radiation in intermediate risk patients, and staging with radiation only with node positive patients. Predicted outcomes for these approaches yield survival rates of 88%, 90%, and 93% in clinical stage I patients who have a risk of pelvic node involvement of approximately 7%. These estimates were 78%, 82%, and 89% in intermediate risk patients who have a risk of nodal spread of approximately 15%. This model accurately predicts the outcome of previous trials and demonstrates that even if lymph node dissection was therapeutic, these trials would have been negative due to study design. Furthermore, future trial designs that are being considered would need to be conducted in high-intermediate risk patients to detect any difference. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, the critical problem of DATA-ACK interference remains. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we utilize a "pipeline" random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and a novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable-hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than other coding options in high-loss and mobile scenarios, while introducing minimal overhead in normal operation.
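    The inter-flow DATA-ACK coding at the heart of this scheme can be illustrated with plain XOR coding (the intra-flow random linear part is omitted here). A hypothetical relay broadcasts one XOR-coded packet instead of forwarding DATA and ACK separately, and each endpoint decodes with the packet it already holds:

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two packets, padding the shorter one with zero bytes."""
    n = max(len(a), len(b))
    a = a.ljust(n, b"\x00")
    b = b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# The relay broadcasts one coded packet instead of forwarding two.
data, ack = b"payload-123", b"ack-7"
coded = xor_packets(data, ack)

# Each endpoint recovers the other flow's packet using its own copy.
recovered_ack = xor_packets(coded, data)[:len(ack)]
recovered_data = xor_packets(coded, ack)[:len(data)]
```

    The saving is one transmission per DATA-ACK pair at the relay, which is exactly the interference-reduction effect the abstract describes.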

  16. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Although there are various source code plagiarism detection approaches, only a few focus on low-level representations for deducing similarity; most consider only the lexical token sequence extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
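    The local-alignment step can be illustrated with a classic Smith-Waterman-style local alignment over opcode sequences. This is a generic sketch, not the Adaptive Local Alignment algorithm of Lim et al. itself, and the CIL-like opcode streams are hypothetical:

```python
def local_alignment_score(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman-style local alignment score between two token sequences."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            # Local alignment: scores are clamped at zero so matches can restart anywhere.
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

# Hypothetical CIL-like opcode streams from two submissions; the second
# swaps the argument loads, a typical plagiarism disguise.
a = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
b = ["ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"]
score = local_alignment_score(a, b)
```

    Reordering the two argument loads costs only one gap, so the near-identical submissions still score 9 against a self-alignment score of 12, illustrating why alignment on compact instruction streams is robust to small edits.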

  17. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...

  18. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths

    Science.gov (United States)

    Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.

    2018-04-01

    We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
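    The core idea of a tree walk with a multipole acceptance criterion can be sketched in a few lines. The hand-built two-node "tree" and the purely geometric MAC below are illustrative; the FLASH solver uses a full octree, MPI communication of partial trees, and the more refined error-based MACs discussed above:

```python
import math

def node_accepted(size, dist, theta=0.5):
    """Geometric multipole acceptance criterion (MAC): use a node's monopole
    if it subtends a small enough angle as seen from the target point."""
    return size < theta * dist

def potential(node, pos, theta=0.5, G=1.0):
    """Recursive tree walk: accept the node's monopole or open its children."""
    dx = [p - c for p, c in zip(pos, node["center"])]
    dist = math.sqrt(sum(d * d for d in dx))
    if not node["children"] or node_accepted(node["size"], dist, theta):
        return -G * node["mass"] / dist
    return sum(potential(ch, pos, theta, G) for ch in node["children"])

# A toy two-leaf tree (point masses at x=0 and x=1, parent node of size 1).
leaf1 = {"center": (0.0, 0.0, 0.0), "size": 0.0, "mass": 1.0, "children": []}
leaf2 = {"center": (1.0, 0.0, 0.0), "size": 0.0, "mass": 1.0, "children": []}
root = {"center": (0.5, 0.0, 0.0), "size": 1.0, "mass": 2.0,
        "children": [leaf1, leaf2]}

far = potential(root, (100.0, 0.0, 0.0))   # node accepted: single monopole
near = potential(root, (2.0, 0.0, 0.0))    # node opened: children summed
```

    Tightening theta opens more nodes, trading computational cost for accuracy, which is exactly the trade-off the various MACs evaluated in the paper control.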

  19. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed architecture can achieve up to 48% higher throughput in comparison with prior work, and it reaches a high operating clock frequency of 297 MHz with a TSMC 65-nm library, meeting the real-time requirement of the in-loop filters for the 8 K × 4 K video format at 132 fps.
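    For readers unfamiliar with SAO, the edge-offset classification that such hardware implements can be sketched in software. The category mapping follows the HEVC edge-offset scheme (local minimum, two edge shapes, local maximum); the offset values in the example are illustrative:

```python
def sao_edge_category(left, cur, right):
    """HEVC-style SAO edge-offset category for one sample against its two
    neighbours along the chosen direction (simplified from the standard)."""
    sign = lambda x: (x > 0) - (x < 0)
    s = sign(cur - left) + sign(cur - right)
    # -2: local minimum (cat 1), -1/+1: edge shapes (cat 2/3),
    # +2: local maximum (cat 4), otherwise category 0 (no offset).
    return {-2: 1, -1: 2, 1: 3, 2: 4}.get(s, 0)

def apply_sao_edge(row, offsets):
    """Apply per-category offsets to the interior samples of one row."""
    out = list(row)
    for i in range(1, len(row) - 1):
        cat = sao_edge_category(row[i - 1], row[i], row[i + 1])
        out[i] = row[i] + offsets.get(cat, 0)
    return out

# A local minimum (category 1) gets pulled up by its signalled offset.
filtered = apply_sao_edge([10, 5, 10], {1: 2})
```

    The encoder's mode decision, whose bitrate-estimation cost the paper simplifies, consists of choosing the direction and the per-category offsets that minimize rate-distortion cost.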

  20. Control code for laboratory adaptive optics teaching system

    Science.gov (United States)

    Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael

    2017-09-01

    By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available in the market. One such is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.

  1. Adaptation of OCA-P, a probabilistic fracture-mechanics code, to a personal computer

    International Nuclear Information System (INIS)

    Ball, D.G.; Cheverton, R.D.

    1985-01-01

    The OCA-P probabilistic fracture-mechanics code can now be executed on a personal computer with 512 kilobytes of memory, a math coprocessor, and a hard disk. A user's guide for the particular adaptation has been prepared, and additional importance sampling techniques for OCA-P have been developed that allow the sampling of only the tails of selected distributions. Features have also been added to OCA-P that permit RTNDT to be used as an "independent" variable in the calculation of P
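    The idea of sampling only the tails of selected distributions can be sketched for a standard normal variate: draw from the truncated tail by inverting the CDF, and carry the tail probability as an importance weight. This is a generic illustration of the technique, not the OCA-P implementation:

```python
import random, math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_upper_tail(threshold):
    """Draw X ~ N(0,1) conditioned on X > threshold, returning (sample, weight)
    where weight = P(X > threshold) is the importance weight."""
    p_tail = 1.0 - normal_cdf(threshold)
    target = normal_cdf(threshold) + random.random() * p_tail
    # Crude bisection inversion of the CDF on the tail interval.
    lo, hi = threshold, threshold + 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), p_tail
```

    Every draw lands in the rare-event region, so far fewer samples are needed to estimate small failure probabilities; the weight restores unbiasedness.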

  2. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
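    The three coding stages can be sketched on a one-dimensional "image". The pipeline below mirrors the description only loosely (the real system computes the local average on a resistive network and tunes the logarithmic characteristics to maximize output entropy); all parameters are illustrative:

```python
import math

def log_transform(pixels, eps=1.0):
    """Logarithmic intensity compression, as in the APS characteristic."""
    return [math.log(p + eps) for p in pixels]

def local_average_subtract(signal, radius=1):
    """Subtract the local mean (here a sliding window; the hardware
    uses a resistive network to do this instantaneously)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(signal[i] - sum(signal[lo:hi]) / (hi - lo))
    return out

def gain_control(signal, target_amplitude=1.0):
    """Simple global feedback gain: scale the peak to a target amplitude."""
    peak = max(abs(s) for s in signal) or 1.0
    return [target_amplitude / peak * s for s in signal]

image = [10, 10, 1000, 10, 10]   # a bright spot under strong illumination
coded = gain_control(local_average_subtract(log_transform(image)))
```

    The log stage tames the huge dynamic range, the subtraction removes the local background, and the gain stage keeps the result within a fixed output swing, the same division of labour as in the sensor system.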

  3. Fission products distributions in Candu primary heat transport and Candu containment systems during a severe accident

    International Nuclear Information System (INIS)

    Constantin, Marin; Rizoiu, Andrei

    2005-01-01

    The paper analyses the distribution of fission products (FPs) in the CANDU Primary Heat Transport (PHT) and CANDU Containment Systems using the ASTEC code (Accident Source Term Evaluation Code). The complexity of the data required by ASTEC, and of both the CANDU PHT and Containment Systems, was a strong motivation to begin with a simplified geometry in order to avoid introducing unmanageable errors at the level of the input deck. Thus only 1/4 of the PHT circuit was simulated, with a simplified FP inventory and some simplifications in the feeder and containment geometry. The circuit consists of 95 horizontal fuel channels connected to 95 horizontal out-feeders, then through vertical feeders to the outlet header (a large pipe that collects the water from the feeders); the circuit continues from the outlet header with a riser and then with the steam generator and a pump. After this pump, the circuit was broken; at this point the FPs are transferred to the containment. The containment model consists of 4 rooms connected by 6 links. The node definitions and the temperature and pressure conditions were chosen to be as close as possible to real data from a CANDU NPP loss-of-coolant accident sequence. Temperature and pressure conditions during the accident were calculated with the CATHENA code, and the source term of FPs introduced into the PHT was estimated with the ORIGEN code. The FP distribution in the nodes of the circuit and the FP mass transfer per isotope and chemical species are obtained using the SOPHAEROS module of the ASTEC code. The distributions in the containment are obtained with the CPA module of ASTEC (thermalhydraulic calculations in the containment and FP aerosol transport). The results consist of mass distributions in the nodes of the circuit, the mass transferred to the containment through the break for different species (FPs and chemical species), and mass distributions in the different parts and

  4. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph; Mortensen, Dale; Evans, Michael; Briones, Janette; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.
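    Threshold-based ACM selection of the kind used with DVB-S2 MODCODs can be sketched as a table lookup with a fade margin. The thresholds and spectral efficiencies below are illustrative, not the actual DVB-S2 values or the SCaN Testbed configuration:

```python
# Illustrative (Es/N0 threshold -> MODCOD) table, sorted by threshold.
# Real DVB-S2 systems use the standard's threshold tables instead.
MODCOD_TABLE = [   # (min Es/N0 in dB, name, spectral efficiency in bits/s/Hz)
    (1.0,  "QPSK 1/2",   1.0),
    (4.0,  "QPSK 3/4",   1.5),
    (7.0,  "8PSK 2/3",   2.0),
    (10.0, "16APSK 3/4", 3.0),
]

def select_modcod(esno_db, margin_db=0.5):
    """Pick the highest-rate MODCOD whose threshold (plus a fade margin)
    the measured Es/N0 still clears; None if the link cannot close."""
    best = None
    for threshold, name, eff in MODCOD_TABLE:
        if esno_db >= threshold + margin_db:
            best = (name, eff)
    return best
```

    The predictive and learning improvements discussed in the paper would effectively replace the static margin with an estimate of the upcoming fade, reducing the gap to link capacity.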

  5. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    Science.gov (United States)

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  6. E.C.-Sarnet. Project presentation (P.P.)

    Energy Technology Data Exchange (ETDEWEB)

    Micaelli, J.C

    2004-07-01

    In spite of the accomplishments of severe accident research, a limited number of specific items remain where research activities are still necessary to further reduce uncertainties that are considered important for nuclear reactor safety and to consolidate severe accident management plans. Facing and anticipating budget reductions, 49 European R&D organizations, including technical supports of safety authorities, industry, utilities and universities, have decided to join their efforts in SARNET in a durable way to resolve outstanding severe accident safety issues and enhance the safety of existing and future nuclear power plants (NPPs). SARNET will: tackle the fragmentation existing in defining and carrying out research programmes; harmonize and improve level 2 probabilistic safety analysis (PSA) methodologies; diffuse the knowledge to associate candidate countries more efficiently; and bring together top scientists in severe accident risk assessment. The integral severe accident analysis code ASTEC will provide the backbone of the integration. Actions are proposed to integrate into ASTEC the current knowledge and all the future knowledge generated within SARNET. In addition, the code will be adapted so as to be usable for any water-cooled reactor application in Europe. IRSN and GRS will do their best to provide the necessary capacity for maintenance, training and development. The network management will coordinate the knowledge generation through joint projects of research activities, monitor its integration in ASTEC, make sure that access rights are correctly implemented, disseminate appropriate information using electronic communication links, preserve the knowledge in scientific databases, and identify the missing knowledge. These actions will be decided and controlled by a Governing Board assisted by appropriate advisory capacities. Most organisations involved will contribute to the diffusion of the knowledge by

  7. E.C.-Sarnet. Project presentation (P.P.)

    International Nuclear Information System (INIS)

    Micaelli, J.C.

    2004-01-01

    In spite of the accomplishments of severe accident research, a limited number of specific items remain where research activities are still necessary to further reduce uncertainties that are considered important for nuclear reactor safety and to consolidate severe accident management plans. Facing and anticipating budget reductions, 49 European R&D organizations, including technical supports of safety authorities, industry, utilities and universities, have decided to join their efforts in SARNET in a durable way to resolve outstanding severe accident safety issues and enhance the safety of existing and future nuclear power plants (NPPs). SARNET will: tackle the fragmentation existing in defining and carrying out research programmes; harmonize and improve level 2 probabilistic safety analysis (PSA) methodologies; diffuse the knowledge to associate candidate countries more efficiently; and bring together top scientists in severe accident risk assessment. The integral severe accident analysis code ASTEC will provide the backbone of the integration. Actions are proposed to integrate into ASTEC the current knowledge and all the future knowledge generated within SARNET. In addition, the code will be adapted so as to be usable for any water-cooled reactor application in Europe. IRSN and GRS will do their best to provide the necessary capacity for maintenance, training and development. The network management will coordinate the knowledge generation through joint projects of research activities, monitor its integration in ASTEC, make sure that access rights are correctly implemented, disseminate appropriate information using electronic communication links, preserve the knowledge in scientific databases, and identify the missing knowledge. These actions will be decided and controlled by a Governing Board assisted by appropriate advisory capacities. Most organisations involved will contribute to the diffusion of the knowledge by

  8. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  9. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

    Three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, using the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the current GOP. The GOP-level MVP scheme is thereby chosen as either S-MVP or AMDST-MVP, where AMDST-MVP is the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the S-MVP-resulting MV of the current block is used to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that AMDST-MVP not only improves the coding efficiency but also reduces the computational complexity.

  10. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    Science.gov (United States)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system combining rate-compatible low-density parity-check (RC-LDPC) codes with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay and relay-destination links. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.

  11. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Directory of Open Access Journals (Sweden)

    Laureline Logiaco

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. All together, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  12. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Science.gov (United States)

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. All together, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  13. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  14. Rate Adaptive OFDMA Communication Systems

    International Nuclear Information System (INIS)

    Abdelhakim, M.M.M.

    2009-01-01

    Due to the varying nature of wireless channels, adapting the transmission parameters, such as code rate, modulation order and power, in response to channel variations provides a significant improvement in system performance. In OFDM systems, per-frame adaptation (PFA) can be employed, where the transmission variables are fixed over a given frame and may change from one frame to the next. Subband (tile) loading offers more degrees of adaptation, such that each group of carriers (subband) uses the same transmission parameters and different subbands may use different parameters. Changing the code rate for each tile in the same frame results in transmitting multiple codewords (MCWs) for a single frame. In this thesis a scheme is proposed for adaptively changing the code rate of coded OFDMA systems by changing the puncturing rate within a single codeword (SCW). In the proposed structure, the data is encoded with the lowest available code rate and then divided among the different tiles, where it is punctured adaptively based on a measure of the channel quality for each tile. The proposed scheme is compared against using multiple codewords (MCWs), where the different code rates for the tiles are obtained using separate encoding processes. For the bit-interleaved coded modulation architecture, two novel interleaving methods are proposed, namely the puncturing-dependent interleaver (PDI) and interleaved puncturing (IntP), which provide a larger interleaving depth. In the PDI method the coded bits with the same rate over different tiles are grouped for interleaving. In the IntP structure the interleaving is performed prior to puncturing. The performance of the adaptive puncturing technique is investigated under constant bit rate and variable bit rate constraints. Two different adaptive modulation and coding (AMC) selection methods are examined for the variable bit rate adaptive system. The first is a recursive scheme that operates directly on the SNR whereas the second
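    The single-codeword idea, encoding once at the mother rate and then puncturing each tile to its own rate, can be sketched as follows. The puncturing patterns and SNR thresholds are illustrative, not those of the thesis:

```python
# Illustrative keep-masks, applied periodically to the mother-code output.
# With a rate-1/2 mother code, keeping 9 of 12 bits gives rate 2/3 and
# keeping 8 of 12 bits gives rate 3/4.
PATTERNS = {
    "1/2": [1, 1],                  # send every coded bit (mother rate)
    "2/3": [1, 1, 1, 0],            # drop every 4th coded bit
    "3/4": [1, 1, 0, 1, 1, 0],      # drop every 3rd coded bit
}

def puncture(coded_bits, rate):
    """Keep only the coded bits selected by the rate's periodic mask."""
    mask = PATTERNS[rate]
    return [b for i, b in enumerate(coded_bits) if mask[i % len(mask)]]

def rate_for_tile(snr_db):
    """Crude per-tile rate choice from a channel-quality measure."""
    if snr_db < 5.0:
        return "1/2"
    return "2/3" if snr_db < 10.0 else "3/4"

codeword = list(range(12))                      # stand-in for mother-code output
tx = puncture(codeword, rate_for_tile(12.0))    # high-SNR tile -> rate 3/4
```

    Because every tile punctures the same mother codeword, the receiver runs a single decoder with the punctured positions treated as erasures, which is what distinguishes the SCW scheme from separately encoded MCWs.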

  15. Enhanced attention amplifies face adaptation.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Progress of IRSN R&D on ITER Safety Assessment

    Science.gov (United States)

    Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.

    2012-08-01

    The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the French "Autorité de Sûreté Nucléaire", is analysing the safety of the ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D program in 2007 to support this safety assessment process. Priority has been given to four technical issues, and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for the risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of the DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixture explosion; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influential factors on detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks were used in 2011 for the analysis of the ITER safety file. In the near future, this global R&D programme may be reoriented to account for the feedback of the latter analysis or for new knowledge.

  17. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information thanks to their advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, and high-efficiency representation of Chinese characters. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR (Quick Response) codes and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive thresholding method for text recognition. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
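The Sauvola thresholding this paper builds on (misspelled "Souvola" in the abstract) can be sketched generically; this is not the paper's modified variant, the window size and the parameters k and R are conventional defaults, and the tiny test image is invented:

```python
# Minimal Sauvola adaptive thresholding sketch: each pixel is compared
# against a threshold T = m * (1 + k * (s / R - 1)) computed from the
# local mean m and local standard deviation s inside a small window.
# window=3, k=0.2, R=128 are textbook defaults, not tuned values.

def sauvola_binarize(img, window=3, k=0.2, R=128.0):
    """Binarize a 2-D list of 8-bit gray values; True = foreground (dark)."""
    h, w = len(img), len(img[0])
    out = [[False] * w for _ in range(h)]
    r = window // 2
    for y in range(h):
        for x in range(w):
            # gather the local neighbourhood, clipped at the image borders
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)                         # local mean
            var = sum((v - m) ** 2 for v in vals) / len(vals)
            s = var ** 0.5                                    # local std dev
            t = m * (1.0 + k * (s / R - 1.0))                 # Sauvola threshold
            out[y][x] = img[y][x] < t                         # dark pixel -> module
    return out

# toy "QR-like" patch: dark modules (20) on an uneven light background
patch = [[20, 200, 180],
         [210, 20, 190],
         [180, 200, 20]]
print(sauvola_binarize(patch))
```

Because the threshold follows the local mean, the dark diagonal is separated cleanly even though the background brightness varies across the patch.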

  18. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.

  19. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
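The MCS-assignment trade-off described above can be made concrete with a toy brute-force search standing in for the paper's ILP formulation; every number below (per-MCS rates, SNR thresholds, user channel qualities, airtime budget) is invented for illustration:

```python
# Toy version of the problem: pick one MCS per SVC layer so that total
# airtime fits in the frame and the summed number of layers decoded by
# all users is maximized. A real system would hand this to an ILP solver;
# here the search space is tiny, so exhaustive enumeration suffices.
from itertools import product

MCS = {0: (1.0, 2.0),    # mcs_id: (bits/symbol, required SNR in dB) -- toy values
       1: (2.0, 8.0),
       2: (4.0, 15.0)}
layer_bits = [600, 600]           # base layer, one enhancement layer
frame_symbols = 1000              # airtime budget in symbols
user_snr = [4.0, 10.0, 20.0]      # heterogeneous channel conditions

def decoded_layers(assign):
    """Total layers decoded: layer l counts only if layers 0..l are all received."""
    total = 0
    for snr in user_snr:
        for mcs in assign:                 # layers in order: stop at first failure
            if snr < MCS[mcs][1]:
                break
            total += 1
    return total

best = None
for assign in product(MCS, repeat=len(layer_bits)):
    airtime = sum(b / MCS[m][0] for b, m in zip(layer_bits, assign))
    if airtime <= frame_symbols:
        score = decoded_layers(assign)
        if best is None or score > best[0]:
            best = (score, assign)
print(best)
```

The winning assignment gives the base layer the most robust MCS so every user receives basic quality, while the enhancement layer uses a faster MCS that only the better channels can decode, which is exactly the SVC/AMC behaviour the abstract describes.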

  20. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Full Text Available Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  1. Cooperative and Adaptive Network Coding for Gradient Based Routing in Wireless Sensor Networks with Multiple Sinks

    Directory of Open Access Journals (Sweden)

    M. E. Migabo

    2017-01-01

    Full Text Available Despite its low computational cost, the Gradient Based Routing (GBR) broadcast of interest messages in Wireless Sensor Networks (WSNs) causes significant packet duplication and unnecessary packet transmissions. This results in energy wastage, traffic load imbalance, high network traffic, and low throughput. Thanks to the emergence of fast and powerful processors, the development of efficient network coding strategies is expected to enable efficient packet aggregation and reduce packet retransmissions. For multiple-sink WSNs, the challenge consists of efficiently selecting a suitable network coding scheme. This article proposes a Cooperative and Adaptive Network Coding for GBR (CoAdNC-GBR) technique which considers the network density, dynamically defined by the average number of neighbouring nodes, to efficiently aggregate interest messages. The aggregation is performed by means of linear combinations of random coefficients of a finite Galois field of variable size GF(2^S) at each node, and the decoding is performed by means of Gaussian elimination. The obtained results reveal that, by exploiting the cooperation of the multiple sinks, CoAdNC-GBR not only improves the transmission reliability of links and lowers the number of transmissions and the propagation latency, but also enhances the energy efficiency of the network when compared to GBR-network coding (GBR-NC) techniques.
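The coding machinery the scheme relies on, random linear combinations of packets decoded by Gaussian elimination, can be sketched over GF(2) for brevity (the paper uses a variable-size field GF(2^S)); the packet contents are invented:

```python
# Random linear network coding sketch over GF(2): addition is XOR, and a
# sink recovers the originals by Gaussian elimination once it has
# collected combinations of full rank. Redundant combinations model the
# extra transmissions that survive lossy links.
import random

def encode(packets, n_combinations, rng):
    """Return (coefficient_vector, coded_payload) pairs over GF(2)."""
    coded = []
    while len(coded) < n_combinations:
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):
            continue                       # all-zero combination carries no info
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p              # addition in GF(2) is XOR
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the k original packets or None."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                    # rank deficient: need more packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

rng = random.Random(1)
packets = [0x41, 0x42, 0x43]               # three "interest messages"
recovered = None
while recovered is None:                   # a real sink would wait for more packets
    coded = encode(packets, 6, rng)
    recovered = decode(coded, len(packets))
print(recovered)
```

Six combinations of three packets almost always have rank three, so a sink usually decodes on the first batch even if some transmissions were lost.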

  2. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794-3800 (United States); Almgren, Ann S.; Zhang, Weiqun [Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  3. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations and that it permits secret sharing for any number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  4. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity than sampling methods.

  5. Adaptive distributed video coding with correlation estimation using expectation propagation

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity than sampling methods.

  6. Analysis of top flooding during molten corium concrete interaction (MCCI) with the code MEDICIS using a simplified approach for the combined effect of crust formation and boiling

    International Nuclear Information System (INIS)

    Spengler, C.

    2012-01-01

    The objective of this work is to provide adequate models in the code MEDICIS for the molten corium concrete interaction (MCCI) phase of a severe accident. Here, the multidimensional distribution of heat fluxes from the molten pool of corium to the sidewall and bottom wall concrete structures in the reactor pit and to the top surface is a persistent subject of international research activities on MCCI. In recent experiments with internally heated oxide melts it was observed that the erosion progress may be anisotropic, with an apparent preference for the sidewall over the bottom wall, or isotropic, depending on the type of concrete with which the corium interacts. The lumped-parameter code MEDICIS, which is part of the severe accident codes ASTEC and COCOSYS (developed jointly by IRSN and GRS, and by GRS, respectively), is dedicated to simulating the phenomenology during MCCI. In this work a simplified modelling approach in MEDICIS is tested to account for the observed ablation behaviour during MCCI, with focus on the heat transfer to the top surface under flooded conditions. This approach is assessed by calculations for selected MCCI experiments involving top flooding of the melt. (orig.)

  7. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.

    Science.gov (United States)

    Chien, Tsair-Wei; Lin, Weir-Sen

    2016-03-02

    The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can (1) be efficient in reducing items and (2) be used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program applying the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (i.e., fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
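The CAT loop the study simulates can be sketched with the simpler dichotomous Rasch model (the study used the partial credit model); the item difficulties, the simulated respondent, and the stopping threshold below are all invented:

```python
# CAT sketch: after each answer, re-estimate the respondent's trait level
# (theta) by maximum likelihood, administer the unasked item with the
# highest Fisher information (difficulty closest to theta), and stop once
# the standard error is small enough -- so fewer items are needed than
# answering the whole bank.
import math

def p_correct(theta, b):
    """Rasch model: probability of a positive response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(asked):
    """Newton-Raphson MLE of theta, clamped to [-4, 4] for extreme patterns."""
    theta = 0.0
    for _ in range(50):
        g = sum(x - p_correct(theta, b) for b, x in asked)
        h = -sum(p * (1 - p) for p in (p_correct(theta, b) for b, _ in asked))
        theta = max(-4.0, min(4.0, theta - g / h))
    return theta

items = [-2.0, -1.0, 0.0, 1.0, 2.0, 0.5, -0.5, 1.5]   # invented difficulties
answer = lambda b: 1 if b <= 0.4 else 0                # simulated respondent

asked, remaining, theta = [], list(items), 0.0
while remaining:
    b = min(remaining, key=lambda d: abs(d - theta))   # most informative item
    remaining.remove(b)
    asked.append((b, answer(b)))
    theta = estimate_theta(asked)
    info = sum(p * (1 - p) for p in (p_correct(theta, d) for d, _ in asked))
    if 1.0 / math.sqrt(info) < 1.0:                    # stop once SE is small
        break
print(len(asked), round(theta, 2))
```

With these toy numbers the loop stops after five of the eight items, illustrating the item-reduction effect the study reports for the full 70-item bank.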

  8. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks.

    Science.gov (United States)

    Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian

    2018-05-10

    Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed over the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn't affect the network lifetime, to nodes at different distances from the code source, then provides an algorithm to construct a broadcast backbone. 
In the end, a comprehensive performance analysis and simulation result shows that the proposed

  9. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shidi Yu

    2018-05-01

    Full Text Available Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed over the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn't affect the network lifetime, to nodes at different distances from the code source, then provides an algorithm to construct a broadcast backbone. 
In the end, a comprehensive performance analysis and simulation result shows that

  10. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  11. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one with the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available and the content features of the input signal are not taken into consideration. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes account of the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.

  12. Adaptive Combined Source and Channel Decoding with Modulation ...

    African Journals Online (AJOL)

    In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. Huffman code is used as the source code and Convolutional code is used for error control. The adaptive scheme employs a family of Convolutional codes of different rates ...

  13. Adapting Canada's northern infrastructure to climate change: the role of codes and standards

    International Nuclear Information System (INIS)

    Steenhof, P.

    2009-01-01

    This report provides the results of a research project that investigated the use of codes and standards in terms of their potential for fostering adaptation to the future impacts of climate change on built infrastructure in Canada's north. The project involved a literature review, key informant interviews, and a workshop where key stakeholders came together to discuss the challenges facing built infrastructure in the north as a result of climate change and the role of codes and standards in helping to mitigate climate change risk. In this article, attention is given to climate data and information requirements related to climate and climate change. This was an important focal area identified through the broader research effort, since adequate data is essential for codes and standards to meet their ultimate policy objective. A number of priorities have been identified specific to data and information needs in the context of the research topic investigated: There is a need to include northerners in developing the climate and permafrost data required for codes and standards, so that these reflect the unique geographical, economic, and cultural realities and variability of the north; Efforts should be undertaken to realign climate design values so that they reflect both present and future risks; There is a need for better information on the rate and extent of permafrost degradation in the north; and, There is a need to improve monitoring of the rate of climate change in the Arctic. (author)

  14. Adaptive Mesh Refinement in CTH

    International Nuclear Information System (INIS)

    Crawford, David

    1999-01-01

    This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor, and massively parallel platforms. An approximate factor of three improvement in memory and performance over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
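The block-based 2:1 refinement rule mentioned above can be illustrated with a toy quadtree of leaf blocks: refining a block first forces any face-neighbour that would end up more than one level coarser to refine as well. The data structure and refinement sequence below are invented for illustration, not taken from CTH:

```python
# Leaf blocks are (level, i, j) tuples on a quadtree over the unit square:
# level-l block (l, i, j) covers [i/2^l, (i+1)/2^l] x [j/2^l, (j+1)/2^l].
# refine() enforces the 2:1 balance by recursively splitting any
# face-neighbour that is one level coarser before splitting the block.

def refine(leaves, blk):
    """Split blk into 4 children, cascading refinement to keep 2:1 balance."""
    level, i, j = blk
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # the coarser block (if any) covering the neighbouring cell
        parent = (level - 1, (i + di) // 2, (j + dj) // 2)
        if parent in leaves:               # neighbour is one level coarser...
            refine(leaves, parent)         # ...so it must be refined first
    leaves.remove(blk)
    for ci in (2 * i, 2 * i + 1):
        for cj in (2 * j, 2 * j + 1):
            leaves.add((level + 1, ci, cj))

leaves = {(0, 0, 0)}                       # root block at level 0
refine(leaves, (0, 0, 0))
refine(leaves, (1, 0, 0))
refine(leaves, (2, 0, 0))                  # deep refinement in one corner
refine(leaves, (3, 1, 1))                 # forces coarser neighbours to split too
print(len(leaves))
```

The last call cascades through two coarser neighbours (which in turn force their own neighbours to split), leaving a mesh in which adjacent blocks never differ by more than one level.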

  15. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    Science.gov (United States)

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code within its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the nature of the fitness landscape considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not a localized deep minimum of the huge fitness landscape.
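The fitness sharing technique itself can be sketched compactly: an individual's raw fitness is divided by a niche count summed over its neighbours within a sharing radius, so crowded regions of the landscape are penalized and the population spreads across niches. The bit-string "codes", distance metric, and fitness function below are toy stand-ins for the paper's hypothetical genetic codes:

```python
# Standard fitness sharing: shared_i = raw_i / sum_j sh(d_ij), with the
# triangular sharing function sh(d) = 1 - (d / sigma)^alpha for d < sigma
# and 0 otherwise. sigma (niche radius) and alpha are conventional knobs.

def shared_fitness(population, raw_fitness, distance, sigma=2.0, alpha=1.0):
    shared = []
    for ind in population:
        niche = 0.0
        for other in population:
            d = distance(ind, other)
            if d < sigma:                       # neighbours inside the niche radius
                niche += 1.0 - (d / sigma) ** alpha
        shared.append(raw_fitness(ind) / niche)  # niche >= 1 (self term)
    return shared

# toy "codes": bit strings; fitness counts 1s, Hamming distance defines niches
pop = ["1111", "1110", "0000", "0001"]
hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
fit = lambda s: s.count("1")
print(shared_fitness(pop, fit, hamming))
```

The two near-identical high-fitness strings discount each other's fitness, which is what lets the algorithm keep exploring other regions of the landscape instead of collapsing onto a single peak.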

  16. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    A scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth of the current level. The coding scheme paves the way for true electronic mail, in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously, between buildings or between continents. The scheme, called the Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. The image quality of the resulting delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.

  17. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The high speed of supercomputers is based on their vector computation capability. The authors have investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation capability of supercomputers, problems regarding their utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithms and program structures used in the codes. The speed-up achieved by pipelined vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety, and nuclear fusion are reported. The speed-up factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of atomic energy codes to supercomputers is fairly good. (Kako, I.)

  18. The LEONAR code: a new tool for PSA Level 2 analyses

    International Nuclear Information System (INIS)

    Tourniaire, B; Spindler, B.; Ratel, G.; Seiler, J.M.; Iooss, B.; Marques, M.; Gaudier, F.; Greffier, G.

    2011-01-01

    The LEONAR code, complementary to integral codes such as MAAP or ASTEC, is a new severe accident simulation tool that can easily calculate 1000 late-phase reactor situations within a few hours and provide a statistical evaluation of those situations. LEONAR can be used to analyse the impact on failure probabilities of specific Severe Accident Management measures (for instance, water injection) or design modifications (for instance, pressure vessel flooding or dedicated reactor pit flooding), or to focus the research effort on key phenomena. The starting conditions for LEONAR are a set of core melting situations calculated separately by a core degradation code (such as MAAP, which is used by EDF). LEONAR describes the core melt evolution after in-core flooding, the corium relocation in the lower head (under dry and wet conditions), the evolution of corium in the lower head including the effect of flooding, the vessel failure, corium relocation in the reactor cavity, the interaction between corium and basemat concrete, and possible corium spreading into neighbouring rooms and onto the containment floor. Scenario events as well as specific physical model parameters are characterised by probability density distributions. The probabilistic evaluation is performed by URANIE, which is coupled to the physical calculations. The calculation results are treated statistically in order to provide easily usable information. This tool can be used to identify the main parameters that influence corium coolability in severe accident late phases, and it is intended to replace PIRT exercises efficiently. An important feature of such a tool is that it can be used to demonstrate that the probability of basemat failure can be significantly reduced by coupling a number of separate severe accident management measures or design modifications, even though no single measure is sufficient by itself to avoid the failure. (authors)
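The statistical treatment described above (uncertain inputs drawn from probability densities, many fast physical evaluations, failure probability estimated from the sample) can be sketched in miniature. Everything below is a hypothetical stand-in: the parameter names, distributions, and the threshold "failure criterion" are invented for illustration and are not LEONAR or URANIE models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy stand-ins for uncertain late-phase inputs (purely illustrative):
melt_mass = rng.normal(100.0, 15.0, n)        # tonnes of corium
ablation_rate = rng.lognormal(-1.0, 0.3, n)   # m/day of concrete ablation
flooding_works = rng.random(n) < 0.8          # one management measure succeeds

# Hypothetical failure criterion: ablation exceeds the basemat thickness
# within the mission time unless flooding arrests the progression.
basemat_thickness = 4.0   # m
mission_days = 10.0
depth = ablation_rate * mission_days * (melt_mass / 100.0)
fails = (depth > basemat_thickness) & ~flooding_works

p = fails.mean()
stderr = np.sqrt(p * (1 - p) / n)
print(f"estimated basemat failure probability: {p:.4f} +/- {2 * stderr:.4f}")
```

Rerunning the sample with a measure disabled (e.g. `flooding_works[:] = False`) shows how the statistical framework quantifies the combined effect of measures that are individually insufficient.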

  19. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    Science.gov (United States)

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
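As a rough illustration of the kind of model analyzed above (a minimal adapting-neuron simulation, not the quasi-renewal equation itself), the following sketch drives a population of independent trials with a time-dependent stimulus, lets each spike add to a self-inhibiting adaptation variable, and estimates the PSTH from the trials. All constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, n_trials = 0.001, 2.0, 500        # time step (s), duration (s), repetitions
steps = int(T / dt)
t = np.arange(steps) * dt

# Time-dependent stimulus drives an inhomogeneous firing intensity (Hz).
stimulus = 20.0 + 15.0 * np.sin(2 * np.pi * 2.0 * t)

tau_adapt, w_adapt = 0.2, 30.0   # adaptation time constant (s) and strength (Hz/spike)
spikes = np.zeros((n_trials, steps), dtype=bool)
adapt = np.zeros(n_trials)       # per-trial spike-triggered adaptation variable

for k in range(steps):
    # Expected past spiking suppresses the rate (average self-inhibition).
    rate = np.clip(stimulus[k] - w_adapt * adapt, 0.0, None)
    fired = rng.random(n_trials) < rate * dt
    spikes[:, k] = fired
    adapt += -adapt * dt / tau_adapt + fired   # decay plus per-spike increment

psth = spikes.mean(axis=0) / dt  # population rate estimate in Hz
print(f"mean rate: {psth.mean():.1f} Hz")
```

The PSTH tracks the stimulus but at a rate strongly reduced by adaptation; the quasi-renewal treatment in the paper is what makes such population rates predictable without simulating every trial.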

  20. Adaptation and perceptual norms

    Science.gov (United States)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  1. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
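The format-compliance property the scheme relies on, that XORing selected bits with a stream cipher keystream changes neither the bit count nor the bit rate, can be shown with a toy sketch. This is a schematic of that one property only, not an H.264/AVC CABAC implementation; the bit vectors and mask below are invented.

```python
import secrets

def selective_xor(bits, key_bits, mask):
    """Length-preserving selective encryption: XOR only positions where mask=1.

    Schematic of stream-cipher encryption of selected entropy-coded bits;
    the real scheme selects specific CABAC bin-string syntax elements.
    """
    return [b ^ (k & m) for b, k, m in zip(bits, key_bits, mask)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for entropy-coded bins
mask = [0, 1, 0, 1, 0, 1, 0, 1]   # only "safe" positions are encrypted
key = [secrets.randbits(1) for _ in bits]

cipher = selective_xor(bits, key, mask)
assert len(cipher) == len(bits)                   # bit rate strictly preserved
assert selective_xor(cipher, key, mask) == bits   # XOR is its own inverse
```

Data hiding then substitutes among equal-length bin-strings, which is why the file size is preserved after embedding as well.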

  2. Enhancement of combined heat and power economic dispatch using self adaptive real-coded genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Subbaraj, P. [Kalasalingam University, Srivilliputhur, Tamilnadu 626 190 (India); Rengaraj, R. [Electrical and Electronics Engineering, S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India); Salivahanan, S. [S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India)

    2009-06-15

    In this paper, a self-adaptive real-coded genetic algorithm (SARGA) is implemented to solve the combined heat and power economic dispatch (CHPED) problem. The self-adaptation is achieved by means of tournament selection along with simulated binary crossover (SBX). The selection process has a powerful exploration capability, creating tournaments between two solutions: the better solution is chosen and placed in the mating pool, leading to better convergence and a reduced computational burden. The SARGA integrates a penalty-parameterless constraint handling strategy and simultaneously handles equality and inequality constraints. Population diversity is introduced through the distribution index of the SBX operator, which creates better offspring. This high diversity in the population increases the probability of reaching the global optimum and prevents premature convergence. The SARGA is applied to solve the CHPED problem with bounded feasible operating regions, which has a large number of local minima. The numerical results demonstrate that the proposed method can find a solution close to the global optimum and compares favourably with other recent methods in terms of solution quality, constraint handling and computation time. (author)
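The two operators named in the abstract can be sketched as follows; this is a minimal illustration of standard SBX and binary tournament selection for one real-valued gene, with the CHPED objective and constraints omitted.

```python
import random

def sbx_pair(p1, p2, eta=2.0):
    """Simulated binary crossover (SBX) for one real-valued gene.

    The distribution index eta controls diversity: large eta keeps children
    near their parents, small eta spreads them out.
    """
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

def tournament(pop, fitness):
    # Binary tournament: the better of two random solutions enters the mating pool.
    a, b = random.sample(pop, 2)
    return a if fitness(a) < fitness(b) else b   # minimisation

c1, c2 = sbx_pair(1.0, 3.0)
assert abs((c1 + c2) - 4.0) < 1e-9   # SBX children preserve the parents' mean
```

The mean-preserving property in the final assertion is what makes SBX behave like single-point crossover on binary strings, translated to real coding.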

  3. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L. [Oak Ridge National Lab., TN (United States); Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France); Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  4. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  5. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  6. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
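The circulant lifting stage can be sketched as follows: each 1 in a base matrix is replaced by an n x n circulant permutation matrix and each 0 by a zero block. The toy base matrix and shift values below are arbitrary illustrations, not a protograph from the patent.

```python
import numpy as np

def circulant_perm(n, shift):
    # n x n circulant permutation matrix: identity with rows rotated by `shift`.
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

def lift(proto, shifts, n):
    """Lift a base matrix: each 1 becomes an n x n circulant permutation,
    each 0 becomes the n x n zero block."""
    rows = []
    for i, row in enumerate(proto):
        blocks = [circulant_perm(n, shifts[i][j]) if v else np.zeros((n, n), dtype=int)
                  for j, v in enumerate(row)]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

proto = [[1, 1, 0], [0, 1, 1]]    # toy 2x3 base matrix, illustrative only
shifts = [[0, 1, 0], [0, 2, 3]]   # arbitrary circulant shifts
H = lift(proto, shifts, 4)
assert H.shape == (8, 12)         # codeword length scaled by the lifting factor
```

Lifting preserves the degree structure of the protograph (each lifted row keeps its base row's number of ones) while expanding the parity-check matrix to the target codeword length.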

  7. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  8. Isotope distributions in primary heat transport and containment systems during a severe accident in CANDU type reactor

    International Nuclear Information System (INIS)

    Constantin, M.

    2005-01-01

    The paper analyses the distribution of fission products (FPs) in the CANDU Primary Heat Transport (PHT) and Containment Systems using the ASTEC code. The complexity of the data required by ASTEC, and the complexity of both the CANDU PHT and the Containment System, were strong motivations to begin with a simplified model. The data related to node definitions and temperature and pressure conditions were chosen to be as close as possible to real data from a CANDU loss-of-coolant accident sequence (CATHENA code results). The source term of FPs introduced into the PHT was estimated by the ORIGEN code. The FP distribution in the nodes of the circuit and the FP mass transfer per isotope and chemical species were obtained with the SOPHAEROS module. The distributions within the containment were obtained with the CPA module (thermal-hydraulic calculations in the containment and FP aerosol transport). The results consist of mass distributions in the nodes of the circuit, the mass transferred to the containment through the break for different species (FPs and chemical species), and mass distributions in the different parts of the containment and different hosts. (authors)

  9. Analysis and Design of Adaptive OCDMA Passive Optical Networks

    Science.gov (United States)

    Hadi, Mohammad; Pakravan, Mohammad Reza

    2017-07-01

    OCDMA systems can support multiple classes of service by differentiating code parameters, power level and diversity order. In this paper, we analyze BER performance of a multi-class 1D/2D OCDMA system and propose a new approximation method that can be used to generate accurate estimation of system BER using a simple mathematical form. The proposed approximation provides insight into proper system level analysis, system level design and sensitivity of system performance to the factors such as code parameters, power level and diversity order. Considering code design, code cardinality and system performance constraints, two design problems are defined and their optimal solutions are provided. We then propose an adaptive OCDMA-PON that adaptively shares unused resources of inactive users among active ones to improve upstream system performance. Using the approximated BER expression and defined design problems, two adaptive code allocation algorithms for the adaptive OCDMA-PON are presented and their performances are evaluated by simulation. Simulation results show that the adaptive code allocation algorithms can increase average transmission rate or decrease average optical power consumption of ONUs for dynamic traffic patterns. According to the simulation results, for an adaptive OCDMA-PON with BER value of 1e-7 and user activity probability of 0.5, transmission rate (optical power consumption) can be increased (decreased) by a factor of 2.25 (0.27) compared to fixed code assignment.

  10. An Adaptive Coding Scheme For Effective Bandwidth And Power ...

    African Journals Online (AJOL)

    Codes for communication channels are in most cases chosen on the basis of the signal to noise ratio expected on a given transmission channel. The worst possible noise condition is normally assumed in the choice of appropriate codes such that a specified minimum error shall result during transmission on the channel.
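The adaptive alternative to the worst-case assumption described above is to match the code to the measured channel. The following sketch selects the highest-rate code whose SNR threshold the channel still meets; the thresholds and code labels are illustrative placeholders, not values from the paper.

```python
def pick_code(snr_db,
              table=((8.0, "rate-3/4"), (4.0, "rate-1/2"), (0.0, "rate-1/3"))):
    """Adaptive coding: walk the threshold table from highest rate to lowest
    and return the first code the measured SNR supports."""
    for threshold, code in table:
        if snr_db >= threshold:
            return code
    return "rate-1/3"   # most robust fallback for very poor channels

assert pick_code(10.0) == "rate-3/4"   # good channel: spend less on redundancy
assert pick_code(-2.0) == "rate-1/3"   # bad channel: maximum protection
```

Compared with always assuming the worst-case channel, this recovers bandwidth and power whenever the channel is better than the design minimum.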

  11. Thermal-hydraulics investigations for the Liquid Lead-Bismuth Target of the SINQ spallation source

    International Nuclear Information System (INIS)

    Sigg, B.; Dury, T.; Hudina, M.; Smith, B.

    1991-01-01

    The paper contains a discussion of the thermal-hydraulic problems of the target which require detailed analysis by means of a two- or three-dimensional space- and in part also time-dependent fluid dynamics code. There follows a short description of the general-purpose code ASTEC, which is being used for these investigations, and examples of the target modelling, including results. The final part of the paper is devoted to a short discussion of experiments against which this application of the code has to be validated. (author)

  12. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and long delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of all the VLCTs. The decoded codeword from the VLCTs can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding methods, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
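The table-free idea can be illustrated with Exp-Golomb coding, the other variable-length code in H.264/AVC (this is a standard Exp-Golomb decoder, not the authors' CAVLC algorithm, whose tables are more irregular): count leading zeros n, read n more bits, and compute the value arithmetically as 2**n - 1 + suffix.

```python
def decode_exp_golomb(bits, pos=0):
    """Decode one unsigned Exp-Golomb codeword with no look-up table.

    Returns (value, next_position). The codeword is n zeros, a 1, then an
    n-bit suffix; value = 2**n - 1 + suffix, computed rather than looked up.
    """
    n = 0
    while bits[pos + n] == 0:   # count leading zeros
        n += 1
    suffix = 0
    for b in bits[pos + n + 1: pos + 2 * n + 1]:
        suffix = (suffix << 1) | b
    return (1 << n) - 1 + suffix, pos + 2 * n + 1

# '00111' decodes to 6: two leading zeros, stop bit, suffix '11' -> 4 - 1 + 3
assert decode_exp_golomb([0, 0, 1, 1, 1]) == (6, 5)
assert decode_exp_golomb([1]) == (0, 1)
```

Every codeword is resolved by counting and shifting alone, which is the property the paper exploits for CAVLC to eliminate VLCT memory traffic.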

  13. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been ported to the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can alone be applied to whole-NSSS system analysis. The 3-D module, developed from COBRA-TF, can be applied to the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules can be optionally invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena coupled with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics can be incorporated to simulate three-dimensional reactor kinetic behavior instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, is not adequate for PC cluster systems where multiple CPUs are available. When parallelism is eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been adapted to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic load library (DLL) in the Windows system, a similar adaptation method

  14. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been ported to the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can alone be applied to whole-NSSS system analysis. The 3-D module, developed from COBRA-TF, can be applied to the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules can be optionally invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena coupled with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics can be incorporated to simulate three-dimensional reactor kinetic behavior instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, is not adequate for PC cluster systems where multiple CPUs are available. When parallelism is eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been adapted to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic load library (DLL) in the Windows system, a similar adaptation method

  15. Application of the integral code MELCOR for German NPPs and use within accident management and PSA projects

    International Nuclear Information System (INIS)

    Sonnenkalb, Martin

    2006-01-01

    The paper summarizes the application of MELCOR to German NPPs with PWR and BWR. Different code systems, such as ATHLET/ATHLET-CD, COCOSYS and ASTEC, are also developed at GRS, but they are not discussed in this paper. GRS has been using MELCOR since 1990 for real plant calculations. The results of MELCOR analyses are used mainly in PSA level 2 studies and in Accident Management projects for both types of NPPs. MELCOR has been a very useful and robust tool for these analyses. The calculations performed within the PSA level 2 studies for both types of German NPPs have shown that typical severe accident scenarios are characterized by several phases and that the consideration of plant specifics is important, not only for realistic source term calculations. An overview of typical severe accident phases together with the main accident management measures installed in German NPPs is presented in the paper. Several severe accident sequences have been calculated for both reactor types, and some detailed nodalisation studies and code-to-code comparisons have been prepared in the past to prove the developed core, reactor circuit and containment/building nodalisation schemes. Together with the compilation of the MELCOR data set, the qualification of the nodalisation schemes has been pursued with comparative calculations with detailed GRS codes for selected phases of severe accidents. The results of these comparative analyses showed in most areas good agreement of essential parameters and of the general description of the plant behaviour during the accident progression. The generally high level of detail of the German plant nodalisation schemes developed for MELCOR contributes significantly to this good agreement between integral and detailed code results. The implementation of MELCOR into the GRS simulator ATLAS was very important for the assessment of the results, not only due to the great detail of the nodalisation schemes used. It is used for training of severe accident

  16. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  17. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  18. Diffusive deposition of aerosols in Phebus containment during FPT-2 test

    International Nuclear Information System (INIS)

    Kontautas, A.; Urbonavicius, E.

    2012-01-01

    At present, lumped-parameter codes are the main tool to investigate the complex response of a Nuclear Power Plant containment in case of an accident. Continuous development and validation of the codes is required to perform realistic investigations of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on comparison of calculated results with measurements performed in experimental facilities. The most extensive experimental programme to investigate fission product release from molten fuel, transport through the cooling circuit and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2, performed in this facility, is considered for the analysis of processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for the analysis of thermal-hydraulic processes and aerosol deposition, but it was also noticed that the calculated diffusive deposition on the vertical walls did not fit the measured results well. The CPA module of the ASTEC code implements a different model for diffusive deposition; therefore, the PHEBUS containment model was transferred from the COCOSYS code to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. The analysis was performed using a PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with the measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The investigations showed that the diffusive deposition model influences the distribution of aerosol deposition on the different surfaces of the test facility. (authors)

  19. Multiplexed Spike Coding and Adaptation in the Thalamus

    Directory of Open Access Journals (Sweden)

    Rebecca A. Mease

    2017-05-01

    High-frequency “burst” clusters of spikes are a generic output pattern of many neurons. While bursting is a ubiquitous computational feature of different nervous systems across animal species, the encoding of synaptic inputs by bursts is not well understood. We find that bursting neurons in the rodent thalamus employ “multiplexing” to differentially encode low- and high-frequency stimulus features associated with either T-type calcium “low-threshold” or fast sodium spiking events, respectively, and these events adapt differently. Thus, thalamic bursts encode disparate information in three channels: (1) burst size, (2) burst onset time, and (3) precise spike timing within bursts. Strikingly, this latter “intraburst” encoding channel shows millisecond-level feature selectivity and adapts across statistical contexts to maintain stable information encoded per spike. Consequently, calcium events both encode low-frequency stimuli and, in parallel, gate a transient window for high-frequency, adaptive stimulus encoding by sodium spike timing, allowing bursts to efficiently convey fine-scale temporal information.

  20. A Spanish version for the new ERA-EDTA coding system for primary renal disease

    Directory of Open Access Journals (Sweden)

    Óscar Zurriaga

    2015-07-01

    Conclusions: Translation and adaptation into Spanish represent an improvement that will help to introduce and use the new coding system for primary renal disease (PRD), as it can help reduce the time devoted to coding and also shorten the period of adaptation of health workers to the new codes.

  1. RBMK-LOCA-Analyses with the ATHLET-Code

    Energy Technology Data Exchange (ETDEWEB)

    Petry, A. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH Kurfuerstendamm, Berlin (Germany); Domoradov, A.; Finjakin, A. [Research and Development Institute of Power Engineering, Moscow (Russian Federation)

    1995-09-01

    The scientific-technical cooperation between Germany and Russia includes the adaptation of several German codes to the Russian-designed RBMK reactor. One element of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) to RBMK-specific safety problems. This paper contains a short description of an RBMK-1000 reactor circuit and presents the main features of the thermal-hydraulic code ATHLET. The main assumptions of the ATHLET RBMK model are discussed. As an example application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. The paper concludes with an overview of the problems posed, and the experience gained, in applying Western best-estimate codes to RBMK calculations.

  2. Anti-voice adaptation suggests prototype-based coding of voice identity

    Directory of Open Access Journals (Sweden)

    Marianne eLatinus

    2011-07-01

    Full Text Available We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.

  3. Participation in EU SARNET2 Project for an Enhancement of Severe Accident Evaluation Capability in Domestic NPPs

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Hong, S. W.; Kim, S. B.; Park, J. H.; Song, Y. M.

    2010-07-01

    The following results were obtained from the first-year SARNET2 research activities. - WP4-2 session: acquisition of the ASTEC 2.0 execution version, user manual and modeling/technical reports for the main six modules; code familiarization via ASTEC 2.0 code structure analysis and sample calculations. - WP6-4 session: preliminary calculations for APR1400 MCCI using the MELCOR code. - WP7-1 session: thermal-hydraulic review of the particle solidification model for steam explosion, suggestion of further activities for modeling the corium material effect, review of physico-chemical particles and their applicability for modeling the corium material effect. - WP7-2 session: review and benchmark analysis of ENACCEF experiments by participating in Task 3 'Combustion benchmark', submission of analysis results for blind tests and preparation of analysis for open tests. - WP8-3 session: acquisition of the ISTP Database and recent reports for two experiments (EPICUR/PARIS), review of technical issues in ST/FP, and selection of its domestic application field. The foregoing research results and experimental database for main SA issues obtained by this research are expected to be used for resolving SA issues remaining in domestic NPPs (operating, to be constructed, future) and enhancing the evaluation capability of Level-2 PSA.

  4. Adaptation in Coding by Large Populations of Neurons in the Retina

    Science.gov (United States)

    Ioffe, Mark L.

    A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion--greater than a half--of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitting the retinal response to different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. Our results form a consistent
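    The pairwise maximum-entropy models mentioned in this record can be sketched in a few lines. The following is a minimal illustration only, not the authors' code: it draws binary spike patterns from a small Ising-type model with entirely hypothetical couplings, using single-site Metropolis updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a tiny 5-neuron pairwise maximum-entropy model:
# P(s) ∝ exp(h·s + 0.5 s·J·s), with binary patterns s_i ∈ {0, 1}.
N = 5
h = rng.normal(-1.0, 0.5, size=N)           # biases (set firing sparsity)
J = rng.normal(0.0, 0.3, size=(N, N))
J = (J + J.T) / 2                           # symmetric pairwise couplings
np.fill_diagonal(J, 0.0)

def energy(s):
    """Negative log-probability (up to a constant) of binary pattern s."""
    return -(h @ s + 0.5 * s @ J @ s)

def metropolis_sample(n_samples, n_burn=1000, thin=10):
    """Draw spike patterns from the model with single-site Metropolis flips."""
    s = rng.integers(0, 2, size=N).astype(float)
    samples = []
    for step in range(n_burn + n_samples * thin):
        i = rng.integers(N)
        s_new = s.copy()
        s_new[i] = 1.0 - s_new[i]
        # Accept with probability min(1, exp(-ΔE))
        if rng.random() < np.exp(energy(s) - energy(s_new)):
            s = s_new
        if step >= n_burn and (step - n_burn) % thin == 0:
            samples.append(s.copy())
    return np.array(samples)

samples = metropolis_sample(2000)
print("mean firing rates:", samples.mean(axis=0))
```

    In the actual study the couplings are of course inferred from the recorded population activity rather than drawn at random; sampling dynamics like these underlie the finite-temperature analysis described in Chapter 5.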

  5. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, that should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  6. Nevada Administrative Code for Special Education Programs.

    Science.gov (United States)

    Nevada State Dept. of Education, Carson City. Special Education Branch.

    This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and…

  7. The Nudo, Rollo, Melon codes and nodal correlations

    International Nuclear Information System (INIS)

    Perlado, J.M.; Aragones, J.M.; Minguez, E.; Pena, J.

    1975-01-01

    Analysis of nodal calculations, with results checked against reference reactor experimental data. Description of the Nudo code, which adapts experimental data to nodal calculations. The Rollo and Melon codes as improvements in cycle-life calculations of albedos, mixing parameters and nodal correlations. (author)

  8. Quantum computing with Majorana fermion codes

    Science.gov (United States)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  9. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    Science.gov (United States)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading combined with adaptive LDPC code rate algorithm is proposed and investigated in a software reconfigurable multiband UWB over fiber system. To compensate the power fading and chromatic dispersion for the high frequency of multiband OFDM UWB signal transmission over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10⁻³ after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the adaptive LDPC code rate (ALCR) algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further promoted effectively. The simulation results present that, at the HD-FEC limit, the value of Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
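    The bit-loading step described in this abstract can be illustrated with a minimal sketch. All values below are hypothetical placeholders, not parameters from the paper: the SNR gap, the per-subcarrier SNR estimates, and the code-rate lookup table are illustrative. Bits per subcarrier are assigned from the SNR estimates, and an LDPC code rate is then chosen from the table.

```python
import math

def bit_loading(snr_db, gap_db=6.0, max_bits=7):
    """Greedy per-subcarrier bit loading: b_k = floor(log2(1 + SNR_k / Γ)),
    where Γ is the SNR gap (hypothetical 6 dB here)."""
    gap = 10 ** (gap_db / 10)
    bits = []
    for s_db in snr_db:
        snr = 10 ** (s_db / 10)
        b = int(math.floor(math.log2(1 + snr / gap)))
        bits.append(min(max(b, 0), max_bits))
    return bits

def pick_ldpc_rate(mean_snr_db, table=((22.0, 0.875), (18.0, 0.75), (14.0, 0.5))):
    """Choose the highest code rate whose (hypothetical) SNR threshold is met;
    otherwise fall back to the lowest rate in the table."""
    for threshold, rate in table:
        if mean_snr_db >= threshold:
            return rate
    return table[-1][1]

snrs = [25.0, 19.5, 16.0, 12.0, 8.0]        # per-subcarrier SNR estimates (dB)
print("bits per subcarrier:", bit_loading(snrs))   # → [6, 4, 3, 2, 1]
print("LDPC code rate:", pick_ldpc_rate(sum(snrs) / len(snrs)))  # → 0.5
```

    Good subcarriers thus carry dense constellations while faded ones carry few bits, and the code rate adapts to the overall channel quality, which is the intuition behind combining the two mechanisms.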

  10. Molecular adaptation during adaptive radiation in the Hawaiian endemic genus Schiedea.

    Directory of Open Access Journals (Sweden)

    Maxim V Kapralov

    2006-12-01

    Full Text Available "Explosive" adaptive radiations on islands remain one of the most puzzling evolutionary phenomena. The rate of phenotypic and ecological adaptations is extremely fast during such events, suggesting that many genes may be under fairly strong selection. However, no evidence for adaptation at the level of protein coding genes was found, so it has been suggested that selection may work mainly on regulatory elements. Here we report the first evidence that positive selection does operate at the level of protein coding genes during rapid adaptive radiations. We studied molecular adaptation in Hawaiian endemic plant genus Schiedea (Caryophyllaceae), which includes closely related species with a striking range of morphological and ecological forms, varying from rainforest vines to woody shrubs growing in desert-like conditions on cliffs. Given the remarkable difference in photosynthetic performance between Schiedea species from different habitats, we focused on the "photosynthetic" Rubisco enzyme, the efficiency of which is known to be a limiting step in plant photosynthesis. We demonstrate that the chloroplast rbcL gene, encoding the large subunit of Rubisco enzyme, evolved under strong positive selection in Schiedea. Adaptive amino acid changes occurred in functionally important regions of Rubisco that interact with Rubisco activase, a chaperone which promotes and maintains the catalytic activity of Rubisco. Interestingly, positive selection acting on the rbcL might have caused favorable cytotypes to spread across several Schiedea species. We report the first evidence for adaptive changes at the DNA and protein sequence level that may have been associated with the evolution of photosynthetic performance and colonization of new habitats during a recent adaptive radiation in an island plant genus. This illustrates how small changes at the molecular level may change ecological species performance and helps us to understand the molecular bases of extremely

  11. Adaptive Relay Activation in the Network Coding Protocols

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2015-01-01

    State-of-the-art Network coding based routing protocols exploit the link quality information to compute the transmission rate in the intermediate nodes. However, the link quality discovery protocols are usually inaccurate, and introduce overhead in wireless mesh networks. In this paper, we presen...

  12. Improvements to SOIL: An Eulerian hydrodynamics code

    International Nuclear Information System (INIS)

    Davis, C.G.

    1988-04-01

    Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength of materials, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. Several conclusions from the present study suggest that some improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs
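    Second-order advection, one of the improvements suggested in this report, can be contrasted with first-order upwinding in a small sketch. This is a generic MUSCL-type scheme with a minmod limiter for 1D linear advection, not SOIL's actual discretization:

```python
import numpy as np

def advect(u0, c, steps, second_order=False):
    """Advance u_t + a u_x = 0 on a periodic grid, CFL number c (0 < c <= 1).
    Either first-order upwind or second-order with a minmod-limited slope."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        if not second_order:
            u = u - c * (u - np.roll(u, 1))
        else:
            dl = u - np.roll(u, 1)              # backward difference
            dr = np.roll(u, -1) - u             # forward difference
            # minmod limiter: zero slope at extrema, limited slope elsewhere
            slope = np.where(dl * dr > 0,
                             np.sign(dl) * np.minimum(abs(dl), abs(dr)), 0.0)
            # upwind face values with second-order slope correction
            face = u + 0.5 * (1 - c) * slope
            u = u - c * (face - np.roll(face, 1))
    return u

x = np.linspace(0, 1, 100, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)          # narrow Gaussian pulse
exact = np.roll(u0, 50)                      # pulse advected by 50 cells
u1 = advect(u0, c=0.5, steps=100)            # 100 steps * 0.5 cells = 50 cells
u2 = advect(u0, c=0.5, steps=100, second_order=True)
print("upwind error:      ", np.abs(u1 - exact).max())
print("second-order error:", np.abs(u2 - exact).max())
```

    The first-order scheme smears the pulse through numerical diffusion; the limited second-order scheme preserves it far better, which is why such upgrades were attractive for Eulerian hydrodynamics codes.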

  13. Mistranslation: from adaptations to applications.

    Science.gov (United States)

    Hoffman, Kyle S; O'Donoghue, Patrick; Brandl, Christopher J

    2017-11-01

    The conservation of the genetic code indicates that there was a single origin, but like all genetic material, the cell's interpretation of the code is subject to evolutionary pressure. Single nucleotide variations in tRNA sequences can modulate codon assignments by altering codon-anticodon pairing or tRNA charging. Either can increase translation errors and even change the code. The frozen accident hypothesis argued that changes to the code would destabilize the proteome and reduce fitness. In studies of model organisms, mistranslation often acts as an adaptive response. These studies reveal evolutionary conserved mechanisms to maintain proteostasis even during high rates of mistranslation. This review discusses the evolutionary basis of altered genetic codes, how mistranslation is identified, and how deviations to the genetic code are exploited. We revisit early discoveries of genetic code deviations and provide examples of adaptive mistranslation events in nature. Lastly, we highlight innovations in synthetic biology to expand the genetic code. The genetic code is still evolving. Mistranslation increases proteomic diversity that enables cells to survive stress conditions or suppress a deleterious allele. Genetic code variants have been identified by genome and metagenome sequence analyses, suppressor genetics, and biochemical characterization. Understanding the mechanisms of translation and genetic code deviations enables the design of new codes to produce novel proteins. Engineering the translation machinery and expanding the genetic code to incorporate non-canonical amino acids are valuable tools in synthetic biology that are impacting biomedical research. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Aerodynamic Test Facility Requirements for Defence R&D to 2000 and Beyond.

    Science.gov (United States)

    1982-09-01

    Defence Force. Following its review of science and technology, the Australian Science and Technology Council (ASTEC) reported that the present pattern...Organisation (DSTO) within the Department of Defence. Accordingly, ASTEC recommended to the Prime Minister that the Department of Defence be asked to develop...DSTO as well as by ASTEC. An additional reason for choosing aerodynamics for early consideration in response to ASTEC's recommendation is that wind

  15. Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model

    NARCIS (Netherlands)

    S.M. Bohte (Sander)

    2012-01-01

    Neural adaptation underlies the ability of neurons to maximize encoded information over a wide dynamic range of input stimuli. While adaptation is an intrinsic feature of neuronal models like the Hodgkin-Huxley model, the challenge is to integrate adaptation in models of neural

  16. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Garcia, Monica; Morandi, Sonia

    2013-01-01

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide transport, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that

  17. Determination of some radiative view factors

    International Nuclear Information System (INIS)

    Ghosh, B.; Mukhopadhyay, D.; Lele, H.G.; Fichot, F.; Guillard, G.

    2011-01-01

    View factors are essential components for the analysis of radiative heat transfer through enclosure methods like the radiosity approach, the direct/total exchange area approach, etc. The view factor is defined as an integral over the interacting surfaces. The view factor integral can be calculated by various approaches, such as: view factor algebra, the direct analytical approach, the contour integration method, the Monte Carlo method, numerical methods based on FDM or FEM, Hottel's string method, etc. The present module of work on determination of view factors is aimed at use in the ASTEC code system for severe accident analysis. There exist many routines (RADB, RADC, GRADEB, RADR, RADL) in the ICARE module of the ASTEC code system to model radiative heat transfer between different types of assemblies of interacting surfaces of different nature. The present work is specially targeted at the radiative heat transfer model for the lower plenum (RADILOWE) and at the extension of the ICARE module for IPHWR. The interacting surfaces within the lower plenum comprise different types of circular, cylindrical and conical surfaces. In the work completed so far, view factor relations have been derived/compiled based on exact/approximate analytical and numerical approaches. (author)
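    As a small illustration of the analytical approach mentioned above, the classical closed-form view factor between two coaxial parallel disks can be evaluated and checked against the reciprocity relation A1·F12 = A2·F21. This is a generic textbook formula, not code from the ASTEC/ICARE routines:

```python
import math

def vf_coaxial_disks(r1, r2, h):
    """View factor F12 from disk 1 (radius r1) to a coaxial parallel disk 2
    (radius r2) at separation h, via the standard analytical result:
    S = 1 + (1 + R2²)/R1², F12 = ½[S - sqrt(S² - 4(R2/R1)²)], R_i = r_i/h."""
    R1, R2 = r1 / h, r2 / h
    S = 1 + (1 + R2 ** 2) / R1 ** 2
    return 0.5 * (S - math.sqrt(S ** 2 - 4 * (R2 / R1) ** 2))

# Equal disks with r = h: the tabulated value is F12 ≈ 0.382
print(round(vf_coaxial_disks(1.0, 1.0, 1.0), 3))    # → 0.382

# Reciprocity check: A1 * F12 == A2 * F21
r1, r2, h = 0.8, 1.5, 1.0
A1, A2 = math.pi * r1 ** 2, math.pi * r2 ** 2
print(math.isclose(A1 * vf_coaxial_disks(r1, r2, h),
                   A2 * vf_coaxial_disks(r2, r1, h)))  # → True
```

    View factor algebra (reciprocity plus summation over an enclosure) provides exactly this kind of consistency check when relations for lower-plenum surfaces are derived or compiled.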

  18. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed that is based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or subspace-based eigenstructure analysis is a function of 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  19. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  20. A New Video Coding Algorithm Using 3D-Subband Coding and Lattice Vector Quantization

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [Taejon Junior College, Taejon (Korea, Republic of); Lee, K.Y. [Sung Kyun Kwan University, Suwon (Korea, Republic of)

    1997-12-01

    In this paper, we propose an efficient motion-adaptive 3-dimensional (3D) video coding algorithm using 3D subband coding (3D-SBC) and lattice vector quantization (LVQ) for low bit rates. Instead of splitting input video sequences into a fixed number of subbands along the temporal axis, we decompose them into temporal subbands of variable size according to motion in frames. Each of the 7 spatio-temporally split subbands is partitioned by a quad tree technique and coded with lattice vector quantization (LVQ). The simulation results show a 0.1~4.3 dB gain over H.261 in peak signal to noise ratio (PSNR) at a low bit rate (64Kbps). (author). 13 refs., 13 figs., 4 tabs.

  1. Information preserving coding for multispectral data

    Science.gov (United States)

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, by reserving two codewords in the codebook to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and for an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
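    The DPCM part of the scheme described above can be sketched minimally. This is plain lossless first-order DPCM on integer data; the paper's reserved-codeword folding mechanism for range expansion is omitted:

```python
def dpcm_encode(samples):
    """First-order DPCM: transmit the first sample, then prediction residuals
    (difference from the previous sample)."""
    residuals = []
    prev = 0
    for x in samples:
        residuals.append(x - prev)
        prev = x
    return residuals

def dpcm_decode(residuals):
    """Invert DPCM by accumulating the residuals."""
    out, prev = [], 0
    for r in residuals:
        prev += r
        out.append(prev)
    return out

data = [10, 12, 15, 15, 14, 200, 201]
enc = dpcm_encode(data)
print(enc)                          # → [10, 2, 3, 0, -1, 186, 1]
print(dpcm_decode(enc) == data)     # → True
```

    Most residuals are small and cheap to code, but occasional large jumps (like 186 here) fall outside a small codebook; the folding idea in the abstract addresses exactly that case by reserving codewords that expand the quantization range on the fly.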

  2. Overview of IRSN R and D on NPP safety, with focus on severe accident

    International Nuclear Information System (INIS)

    Van Dorsselaere, Jean-Pierre

    2015-01-01

    IRSN contributes to the continuous improvement of the safety level of Gen.II and III reactors, with the aim of bringing Gen.II closer to the target safety level of Gen.III. This requires building the necessary knowledge to appreciate margins for safety-important systems, structures and components in the frame of plant operating life extension beyond 40 years. Research is a major IRSN mission that is tightly linked to expertise needs: it involves 40% of the overall budget and, excluding radiation protection and safety of waste disposal, around 280 scientists. IRSN has acquired a huge experience in the last 30 years on severe accidents, both on experimental and theoretical aspects, in particular through the management of large international research programmes like the Phébus FP integral experiments in the last 20 years and the coordination of the SARNET network of excellence that continues now in the frame of the NUGENIA European association. Besides, IRSN is developing, in collaboration with GRS (Germany), the integral system code ASTEC that is considered now as the European reference code due to the continuous capitalization of all the international knowledge. The presentation summarizes the ongoing IRSN research on the different phenomena involved in severe accidents, with more focus in the last years on mitigation devices or measures, i.e. for in-vessel and ex-vessel corium coolability, hydrogen explosion risk and source term. IRSN leads several international projects in the Euratom frame (such as CESAM on ASTEC, PASSAM on source term mitigation, and IVMR on in-vessel corium retention) or OECD/NEA/CSNI (such as STEM). Moreover, several national projects on the above issues are ongoing with the French actors in this domain. Collaboration between IRSN and India is very active and efficient on the ASTEC code with BARC and AERB, in particular through PHWR model development and assessment, and could be extended in the future to other issues either on severe accidents or on other topics. (author)

  3. Latest improvements on TRACPWR six-equations thermohydraulic code

    International Nuclear Information System (INIS)

    Rivero, N.; Batuecas, T.; Martinez, R.; Munoz, J.; Lenhardt, G.; Serrano, P.

    1999-01-01

    The paper presents the latest improvements on TRACPWR aimed at adapting the code to present trends in computer platforms, architectures and training requirements, as well as extending the scope of the code itself and its applicability to technologies other than the Westinghouse PWR. Firstly, the major features of TRACPWR as a best-estimate and real-time simulation code are summarized; then the areas where TRACPWR is being improved are presented. These areas comprise: (1) Architecture: integrating the TRACPWR and RELAP5 codes; (2) Code scope enhancement: modelling Mid-Loop operation; (3) Code speed-up: applying parallelization techniques; (4) Code platform downswing: porting to the Windows NT platform; (5) On-line performance: allowing simulation initialisation from a Plant Process Computer; and (6) Code scope extension: using the code for modelling VVER and PHWR technology. (author)

  4. Code bench-marking for long-term tracking and adaptive algorithms

    OpenAIRE

    Schmidt, Frank; Alexahin, Yuri; Amundson, James; Bartosik, Hannes; Franchetti, Giuliano; Holmes, Jeffrey; Huschauer, Alexander; Kapin, Valery; Oeftiger, Adrian; Stern, Eric; Titze, Malte

    2016-01-01

    At CERN we have ramped up a program to investigate space charge effects in the LHC pre-injectors with high brightness beams and long storage times. This is in view of the LIU upgrade project for these accelerators. These studies require massive simulations over a large number of turns. To this end we have been looking at all available codes and have started collaborations on code development with several laboratories: pyORBIT from SNS, SYNERGIA from Fermilab, MICROMAP from GSI and our in-house MAD-X cod...

  5. A Bayesian approach to the modelling of α Cen A

    DEFF Research Database (Denmark)

    Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.

    2012-01-01

    the stellar evolutionary code astec to model the star. To constrain this model both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two free parameters or five free parameters in astec. We are thus able to show evidence that MCMC methods become efficient with respect to more classical grid-based strategies when the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates for the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ∼40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance...

  6. Simulation codes and the impact of validation/uncertainty requirements

    International Nuclear Information System (INIS)

    Sills, H.E.

    1995-01-01

    Several of the OECD/CSNI members have adopted a proposed methodology for code validation and uncertainty assessment. Although the validation process adopted by members has a high degree of commonality, the uncertainty assessment processes selected are more variable, ranging from subjective to formal. This paper describes the validation and uncertainty assessment process, the sources of uncertainty, methods of reducing uncertainty, and methods of assessing uncertainty. Examples are presented from the Ontario Hydro application of the validation methodology and uncertainty assessment to the system thermal hydraulics discipline and the TUF (1) system thermal hydraulics code. (author)

  7. Development of code PRETOR for stellarator simulation

    International Nuclear Information System (INIS)

    Dies, J.; Fontanet, J.; Fontdecaba, J.M.; Castejon, F.; Alejandre, C.

    1998-01-01

    The Department de Fisica i Enginyeria Nuclear (DFEN) of the UPC has experience in the development of the transport code PRETOR. This code has been validated with shots of DIII-D, JET and TFTR, and it has also been used in the simulation of operational scenarios of ITER fast burn termination. Recently, the association EURATOM-CIEMAT has started the operation of the TJ-II stellarator. Due to the need to validate the results given by other transport codes applied to stellarators, all of which make approximations such as averaging magnitudes over each magnetic surface, it was considered appropriate to adapt the PRETOR code to devices without axial symmetry, like stellarators, in line with the specific needs of the study of TJ-II. Several modifications are required in PRETOR; the main ones concern the models of magnetic equilibrium, geometry, and transport of energy and particles. In order to handle the complex magnetic equilibrium geometry, the powerful numerical code VMEC has been used. This code gives the magnetic surface shape as a Fourier series in terms of the harmonics (m,n). Most of the geometric magnitudes are also obtained from the VMEC results file. The energy and particle transport models will be replaced by other phenomenological models that are better adapted to stellarator simulation. Using the proposed models, the aim is to reproduce experimental data available from present stellarators, giving special attention to the TJ-II of the association EURATOM-CIEMAT. (Author)

  8. A seismic data compression system using subband coding

    Science.gov (United States)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
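
    The three-stage pipeline described above (decorrelation, controlled-distortion quantization, entropy coding) can be illustrated with the simplest possible subband filter, a one-level Haar split. This is a sketch of the principle only, not the article's filter bank; the signal values are made up.

```python
def haar_analysis(x):
    """One level of Haar subband analysis: split a signal into a low-pass
    (average) band and a high-pass (detail) band. For a correlated signal
    the detail band is near zero, which is what makes it cheap to
    entropy-code after quantization."""
    lo = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return lo, hi

def haar_synthesis(lo, hi):
    """Invert the analysis step exactly."""
    x = []
    for a, d in zip(lo, hi):
        x += [a + d, a - d]
    return x

def quantize(band, step):
    """Uniform quantization: the controlled-distortion stage."""
    return [round(v / step) for v in band]

def dequantize(idx, step):
    return [i * step for i in idx]

signal = [10.0, 10.5, 11.0, 11.2, 10.8, 10.1, 9.9, 9.7]
lo, hi = haar_analysis(signal)
step = 0.5
rec = haar_synthesis(dequantize(quantize(lo, step), step),
                     dequantize(quantize(hi, step), step))
# reconstruction error is bounded by the quantizer step
```

    In a progressive-transmission setting the low-pass band would be sent first as the coarse version, with the quantized detail bands following as refinements.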

  9. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds…

  10. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
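
    The core idea above, treating the bits of fixed-length codewords as independent binary sources, can be sketched by computing the ideal (entropy-limited) coded length per bit position and comparing it against the fixed-length cost. This is an illustration of the bound the coder approaches, not the article's arithmetic coder itself; the sample data is made up.

```python
import math

def bitwise_ideal_length(codewords, width):
    """Treat each bit position of the fixed-length codewords as an
    independent binary source and return the ideal coded length in bits:
    the sum over positions of (symbol count) x (binary entropy)."""
    n = len(codewords)
    total = 0.0
    for pos in range(width):
        ones = sum((c >> pos) & 1 for c in codewords)
        for k in (ones, n - ones):
            if k:
                total -= k * math.log2(k / n)
    return total

# A skewed quantizer output: small indices dominate, so the high-order
# bit position is strongly biased and compresses well.
data = [0] * 60 + [1] * 25 + [2] * 10 + [3] * 5
ideal = bitwise_ideal_length(data, width=2)
fixed = 2 * len(data)
# ideal < fixed for a biased source
```

    Block-adaptive operation, as in the article, would recompute the per-position statistics on each block, at the cost of the small overhead of signalling them.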

  11. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D micro-blocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
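
    The analysis-by-synthesis step mentioned above can be sketched in its simplest form: run every candidate excitation through the synthesis filter and keep the one whose output best matches the target block. The sketch below uses a 1-D first-order AR filter and a three-entry codebook with made-up values, far simpler than the 3-D MFCELP setup, but it is the same selection principle.

```python
def synthesize(excitation, a, init):
    """Run a first-order AR synthesis filter: s[t] = a*s[t-1] + e[t]."""
    s, prev = [], init
    for e in excitation:
        prev = a * prev + e
        s.append(prev)
    return s

def best_excitation(target, codebook, a, init):
    """Analysis-by-synthesis: synthesize every candidate excitation and
    keep the one whose output is closest (squared error) to the target
    microblock."""
    def err(exc):
        return sum((t - s) ** 2
                   for t, s in zip(target, synthesize(exc, a, init)))
    return min(codebook, key=err)

a, init = 0.9, 1.0                      # toy predictor and filter state
target = [1.1, 1.2, 1.0, 0.9]          # microblock to encode
codebook = [
    (0.0, 0.0, 0.0, 0.0),
    (0.2, 0.2, 0.0, 0.0),
    (0.5, -0.5, 0.5, -0.5),
]
chosen = best_excitation(target, codebook, a, init)
```

    Only the index of the chosen excitation needs to be transmitted; the decoder reruns the same synthesis filter to reproduce the block.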

  12. Uplink capacity of multi-class IEEE 802.16j relay networks with adaptive modulation and coding

    DEFF Research Database (Denmark)

    Wang, Hua; Xiong, C; Iversen, Villy Bæk

    2009-01-01

    The emerging IEEE 802.16j mobile multi-hop relay (MMR) network is currently being developed to increase the user throughput and extend the service coverage as an enhancement of existing 802.16e standard. In 802.16j, the intermediate relay stations (RSs) help the base station (BS) communicate...... with those mobile stations (MSs) that are either too far away from the BS or placed in an area where direct communication with BS experiences unsatisfactory level of service. In this paper, we investigate the uplink Erlang capacity of a two-hop 802.16j relay system supporting both voice and data traffics...... with adaptive modulation and coding (AMC) scheme applied in the physical layer. We first develop analytical models to calculate the blocking probability in the access zone and the outage probability in the relay zone, respectively. Then a joint algorithm is proposed to determine the bandwidth distribution...
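
    The Erlang-capacity analysis in this record builds on classical teletraffic theory. As a minimal building block (the paper's joint voice/data model with AMC is considerably more involved), the standard Erlang-B blocking probability can be computed with the usual stable recursion:

```python
def erlang_b(traffic, servers):
    """Erlang-B blocking probability via the stable recursion
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1)),
    where A is the offered traffic in Erlang and n the channel count."""
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic * b / (n + traffic * b)
    return b

# 2 Erlang offered to 4 channels blocks roughly 9.5% of calls.
p = erlang_b(2.0, 4)
```

    The recursion avoids the overflow-prone factorials of the closed-form Erlang-B expression, which matters when sweeping over large channel counts in a capacity study.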

  13. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  14. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  15. Adaptive Network Coded Clouds: High Speed Downloads and Cost-Effective Version Control

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Heide, Janus; Roetter, Daniel Enrique Lucani

    2018-01-01

    Although cloud systems provide a reliable and flexible storage solution, the use of a single cloud service constitutes a single point of failure, which can compromise data availability, download speed, and security. To address these challenges, we advocate for the use of multiple cloud storage providers simultaneously, using network coding as the key enabling technology. Our goal is to study two challenges of network coded storage systems. First, the efficient update of the number of coded fragments per cloud in a system aggregating multiple clouds in order to boost the download speed of files. We developed a novel scheme using recoding with limited packets to trade off storage space, reliability, and data retrieval speed. Implementation and measurements with commercial cloud providers show that up to 9x less network use is needed compared to other network coding schemes, while maintaining similar…
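
    The enabling idea, storing random linear combinations of file fragments across clouds so that any full-rank subset of coded pieces restores the file, can be sketched as below. This is a generic random-linear-coding illustration over GF(2), not the authors' recoding scheme (which trades off storage, reliability, and retrieval speed with larger fields and limited-packet recoding); fragment values and counts are made up.

```python
import random

def encode(fragments, n_coded, rng):
    """Produce n_coded random GF(2) linear combinations of the original
    fragments (each fragment is an int bit-string). A coded piece is a
    (coefficient bitmask, XOR payload) pair; the mask records which
    fragments were combined."""
    k = len(fragments)
    coded = []
    for _ in range(n_coded):
        mask = rng.randrange(1, 1 << k)        # random nonzero coefficients
        payload = 0
        for i in range(k):
            if (mask >> i) & 1:
                payload ^= fragments[i]
        coded.append((mask, payload))
    return coded

def decode(coded, k):
    """Recover the k fragments by Gaussian elimination over GF(2).
    Returns None if the received coefficient vectors are rank-deficient
    (i.e. more coded pieces are needed)."""
    basis = [None] * k                          # basis[p]: row with top bit p
    for mask, payload in coded:
        while mask:
            p = mask.bit_length() - 1
            if basis[p] is None:
                basis[p] = (mask, payload)
                break
            bm, bp = basis[p]
            mask, payload = mask ^ bm, payload ^ bp   # reduce by pivot row
    if any(row is None for row in basis):
        return None
    out = [0] * k
    for p in range(k):                          # back-substitute, low to high
        mask, payload = basis[p]
        for i in range(p):
            if (mask >> i) & 1:
                payload ^= out[i]
        out[p] = payload
    return out

# Spread coded pieces of a 3-fragment file over several clouds; any
# full-rank subset of pieces restores the original fragments.
fragments = [0b10110010, 0b01101100, 0b11100001]
pieces = encode(fragments, 8, random.Random(7))
recovered = decode(pieces, len(fragments))
```

    Because any full-rank set of pieces suffices, a slow or failed cloud can simply be ignored during download, which is what makes the multi-cloud aggregation robust.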

  16. Dengue virus genomic variation associated with mosquito adaptation defines the pattern of viral non-coding RNAs and fitness in human cells.

    Directory of Open Access Journals (Sweden)

    Claudia V Filomatori

    2017-03-01

    The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitos and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery, but was found to be dependent on the selection of specific mutations in the viral 3'UTR. Dissecting the viral population and studying phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTR of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of

  17. Fission products transport in CANDU Primary Heat Transport System in a severe accident

    International Nuclear Information System (INIS)

    Constantin, M.; Rizoiu, A.; Turcu, I.; Negut, Gh.

    2005-01-01

    Full text: The paper analyses the distribution of the fission products (FPs) in the CANDU Primary Heat Transport (PHT) System by using the ASTEC code (Accident Source Term Evaluation Code). The complexity of the data required by ASTEC and the complexity of the CANDU PHT were a strong motivation to begin with a simplified geometry, in order to avoid introducing unmanageable errors at the level of the input deck. Thus only 1/4 of the PHT circuit was simulated; a simplified FPs inventory and some simplifications in the feeders geometry were also used. The circuit consists of 95 horizontal fuel channels connected to 95 horizontal out-feeders, then through vertical feeders to the outlet header (a big pipe that collects the water from the feeders); the circuit continues from the outlet header with a riser and then with the steam generator and a pump. After this pump, the circuit was broken; at this point the FPs are transferred to the containment. The data related to the node definitions, temperatures and pressure conditions were chosen to be as close as possible to real data from a CANDU NPP loss-of-coolant accident sequence. Temperature and pressure conditions during the accident were calculated by the CATHENA code, and the source term of FPs introduced into the PHT was estimated by the ORIGEN code. The results consist of mass distributions in the nodes of the circuit and the mass transfer to the containment through the break for different species (FPs and chemical species). The study is completed by a sensitivity analysis for the parameters with important uncertainties. (authors)

  18. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-01-01

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application
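
    The adaption step described above amounts to an inverse problem: adjust input parameters so the simulator reproduces measured observables. As a minimal two-parameter sketch (the paper handles millions of parameters with efficient sensitivity analysis and a full discrete-inverse-theory treatment), a single linearized, Tikhonov-damped least-squares update looks like this; all numbers are made up.

```python
def adapt_parameters(S, mismatch, damping=0.1):
    """One linearized adaption step for two adjustable parameters:
    solve the damped normal equations
        (S^T S + damping * I) dp = S^T r
    where S is the sensitivity matrix of the observables with respect to
    the parameters and r is the measured-minus-predicted mismatch.
    The 2x2 system is inverted directly."""
    a = sum(row[0] * row[0] for row in S) + damping
    b = sum(row[0] * row[1] for row in S)
    d = sum(row[1] * row[1] for row in S) + damping
    r0 = sum(row[0] * m for row, m in zip(S, mismatch))
    r1 = sum(row[1] * m for row, m in zip(S, mismatch))
    det = a * d - b * b
    return ((d * r0 - b * r1) / det, (a * r1 - b * r0) / det)

# Three observables (e.g. detector readings), two parameters (toy numbers).
S = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mismatch = [0.1, -0.2, -0.1]          # measured minus predicted
dp = adapt_parameters(S, mismatch)    # parameter update shrinking the mismatch
```

    The damping term plays the regularizing role that keeps the under- or over-determined adaption well-posed, the same reason a regularized inverse is used in the full-scale problem.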

  19. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt…
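
    The defining PHMM ingredient, emission probabilities that condition on both the hidden state and past observations, can be sketched with a tiny forward recursion over bits. This is an illustrative two-state toy with made-up parameters (conditioning on one previous bit only), not the paper's models or reestimation procedure; the resulting -log2 probability is the ideal arithmetic-coded length under the model.

```python
import math

def phmm_forward(obs, pi, A, B):
    """Forward recursion for a toy partially hidden Markov model over
    bits: the probability of bit x_t depends both on the hidden state
    and on the previous observed bit (the 'partially hidden' part).

    pi[s]         -- initial state distribution
    A[s][t]       -- hidden state transition probabilities
    B[s][prev][x] -- P(x_t = x | state s, previous bit prev)
    Returns P(obs) under the model.
    """
    S = range(len(pi))
    alpha = [pi[s] * B[s][0][obs[0]] for s in S]   # convention: prev bit = 0 at t = 0
    for t in range(1, len(obs)):
        prev = obs[t - 1]
        alpha = [sum(alpha[s] * A[s][s2] for s in S) * B[s2][prev][obs[t]]
                 for s2 in S]
    return sum(alpha)

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.2, 0.8]]
B = [[[0.9, 0.1], [0.4, 0.6]],    # state 0: emission dists for prev bit 0 / 1
     [[0.5, 0.5], [0.1, 0.9]]]    # state 1
p = phmm_forward([0, 1, 1, 0], pi, A, B)
bits = -math.log2(p)              # ideal code length for this sequence
```

    Feeding such sequence probabilities to an arithmetic coder yields a lossless code whose length approaches this ideal, which is how the PHMM is used for bi-level image coding.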

  20. An approach enabling adaptive FEC for OFDM in fiber-VLLC system

    Science.gov (United States)

    Wei, Yiran; He, Jing; Deng, Rui; Shi, Jin; Chen, Shenghai; Chen, Lin

    2017-12-01

    In this paper, we propose an orthogonal circulant matrix transform (OCT)-based adaptive frame-level forward error correction (FEC) scheme for the fiber-visible laser light communication (VLLC) system and experimentally demonstrate it with Reed-Solomon (RS) codes. In this method, no extra bits are spent on adaptation messages, apart from the training sequence (TS), which is simultaneously used for synchronization and channel estimation. Therefore, RS coding can be adaptively performed frame by frame via the last received codeword-error-rate (CER) feedback, estimated from the TSs of the previous few OFDM frames. In addition, the experimental results show that over 20 km of standard single-mode fiber (SSMF) and 8 m of visible light transmission, the costs of RS codewords are at most 14.12% lower than those of conventional adaptive subcarrier-RS-code based 16-QAM OFDM at a bit error rate (BER) of 10⁻⁵.
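
    The frame-level adaptation loop described above, averaging the CER estimated from the training sequences of the last few frames and picking the code strength for the next frame, can be sketched as a simple threshold table. The thresholds, window length, and parity sizes below are hypothetical illustrations, not values from the paper.

```python
def select_rs_parity(recent_cer, table=((1e-4, 4), (1e-3, 8), (1e-2, 16))):
    """Map the codeword-error-rate feedback (estimated from the TSs of
    the previous few OFDM frames) to a number of RS parity symbols for
    the next frame. Thresholds and parity sizes are illustrative only."""
    cer = sum(recent_cer) / len(recent_cer)   # average over the feedback window
    for threshold, parity in table:
        if cer <= threshold:
            return parity
    return 32                                 # worst-case protection

# clean channel -> light parity; degraded channel -> heavy parity
light = select_rs_parity([5e-5, 8e-5, 6e-5])
heavy = select_rs_parity([2e-2, 3e-2, 1e-2])
```

    Because the TS already exists for synchronization and channel estimation, reusing it for the CER estimate is what keeps the adaptation overhead at zero extra bits.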

  1. The code of ethics for nurses.

    Science.gov (United States)

    Zahedi, F; Sanjari, M; Aala, M; Peymani, M; Aramesh, K; Parsapour, A; Maddah, Ss Bagher; Cheraghi, Ma; Mirzabeigi, Gh; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Nurses are ever-increasingly confronted with complex concerns in their practice. Codes of ethics are fundamental guidance for nursing as many other professions. Although there are authentic international codes of ethics for nurses, the national code would be the additional assistance provided for clinical nurses in their complex roles in care of patients, education, research and management of some parts of health care system in the country. A national code can provide nurses with culturally-adapted guidance and help them to make ethical decisions more closely to the Iranian-Islamic background. Given the general acknowledgement of the need, the National Code of Ethics for Nurses was compiled as a joint project (2009-2011). The Code was approved by the Health Policy Council of the Ministry of Health and Medical Education and communicated to all universities, healthcare centers, hospitals and research centers early in 2011. The focus of this article is on the course of action through which the Code was compiled, amended and approved. The main concepts of the code will be also presented here. No doubt, development of the codes should be considered as an ongoing process. This is an overall responsibility to keep the codes current, updated with the new progresses of science and emerging challenges, and pertinent to the nursing practice.

  2. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, Luis E., E-mail: luisen.herranz@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Garcia, Monica, E-mail: monica.gmartin@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Morandi, Sonia, E-mail: sonia.morandi@rse-web.it [Nuclear and Industrial Plant Safety Team, Power Generation System Department, RSE, via Rubattino 54, 20134 Milano (Italy)

    2013-12-15

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide transport, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have

  3. Uniform Circular Antenna Array Applications in Coded DS-CDMA Mobile Communication Systems

    National Research Council Canada - National Science Library

    Seow, Tian

    2003-01-01

    ...) has greatly increased. This thesis examines the use of an equally spaced circular adaptive antenna array at the mobile station for a typical coded direct sequence code division multiple access (DS-CDMA...

  4. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

    Full Text Available In multiple-input multiple-output (MIMO transmission systems, the channel state information (CSI at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low density parity check (NB-LDPC. The q elements in the Galois field, GF(q, are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to perfectly fit with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin criterion. These results are theoretically supported by extrinsic information transfer (EXIT analysis and are confirmed by numerical simulations.

  5. Studies on the role of molybdenum on iodine transport in the RCS in nuclear severe accident conditions

    International Nuclear Information System (INIS)

    Grégoire, A.-C.; Kalilainen, J.; Cousin, F.; Mutelle, H.; Cantrel, L.; Auvinen, A.; Haste, T.; Sobanska, S.

    2015-01-01

    Highlights: • In oxidising conditions, Mo reacts with Cs and thus promotes gaseous iodine release. • In reducing conditions, CsI remains the dominant form for released iodine. • The nature of released iodine is well reproduced by the ASTEC code. - Abstract: The effect of molybdenum on iodine transport in the reactor coolant system (RCS) under PWR severe accident conditions was investigated in the framework of the EU SARNET project. Experiments were conducted at the VTT-Institute and at IRSN and simulations of the experimental results were performed with the ASTEC severe accident simulation code. As molybdenum affects caesium chemistry by formation of molybdates, it may have a significant impact on iodine transport in the RCS. Experimentally it has been shown that the formation of gaseous iodine is promoted in oxidising conditions, as caesium can be completely consumed to form caesium polymolybdates and is thus not available for reacting with gaseous iodine and leading to CsI aerosols. In reducing conditions, CsI remains the dominant form of iodine, as the amount of oxygen is not sufficient to allow formation of quantitative caesium polymolybdates. An I–Mo–Cs model has been developed and it reproduces well the experimental trends on iodine transport

  6. Recent severe accident research synthesis of the major outcomes from the SARNET network

    Energy Technology Data Exchange (ETDEWEB)

    Van Dorsselaere, J.-P., E-mail: jean-pierre.van-dorsselaere@irsn.fr [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), Saint-Paul-lez-Durance (France); Auvinen, A. [VTT Technical Research Centre, Espoo (Finland); Beraha, D. [Gesellschaft für Anlagen- und Reaktorsicherheit mbH (GRS), Köln (Germany); Chatelard, P. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), Saint-Paul-lez-Durance (France); Herranz, L.E. [Centro de Investigaciones Energéticas MedioAmbientales y Tecnológicas (CIEMAT), Madrid (Spain); Journeau, C. [Commissariat à l’Energie Atomique et aux Energies Alternatives (CEA), Paris (France); Klein-Hessling, W. [Gesellschaft für Anlagen- und Reaktorsicherheit mbH (GRS), Köln (Germany); Kljenak, I. [Jozef Stefan Institute (JSI), Ljubljana (Slovenia); Miassoedov, A. [Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany); Paci, S. [University of Pisa, Pisa (Italy); Zeyen, R. [European Commission Joint Research Centre, Institute for Energy (JRC/IET), Petten (Netherlands)

    2015-09-15

    Highlights: • SARNET network of excellence integration mid-2013 in the NUGENIA Association. • Progress of knowledge on corium behaviour, hydrogen explosion and source term. • Further development of ASTEC integral code to capitalize knowledge. • Ranking of next R&D high priority issues accounting for international research. • Dissemination of knowledge through education courses and ERMSAR conferences. - Abstract: The SARNET network (Severe Accident Research NETwork of excellence), co-funded by the European Commission from 2004 to 2013, has allowed to significantly improve the knowledge on severe accidents and to disseminate it through courses and ERMSAR conferences. The major investigated topics, involving more than 250 researchers from 22 countries, were in- and ex-vessel corium/debris coolability, molten-core–concrete-interaction, steam explosion, hydrogen combustion and mitigation in containment, impact of oxidising conditions on source term, and iodine chemistry. The ranking of the high priority issues was updated to account for the results of recent international research and for the impact of Fukushima nuclear accidents in Japan. In addition, the ASTEC integral code was further developed to capitalize the new knowledge. The network has reached self-sustainability by integration in mid-2013 into the NUGENIA Association. The main activities and outcomes of the network are presented.

  7. Recent severe accident research synthesis of the major outcomes from the SARNET network

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.-P.; Auvinen, A.; Beraha, D.; Chatelard, P.; Herranz, L.E.; Journeau, C.; Klein-Hessling, W.; Kljenak, I.; Miassoedov, A.; Paci, S.; Zeyen, R.

    2015-01-01

    Highlights: • SARNET network of excellence integration mid-2013 in the NUGENIA Association. • Progress of knowledge on corium behaviour, hydrogen explosion and source term. • Further development of ASTEC integral code to capitalize knowledge. • Ranking of next R&D high priority issues accounting for international research. • Dissemination of knowledge through education courses and ERMSAR conferences. - Abstract: The SARNET network (Severe Accident Research NETwork of excellence), co-funded by the European Commission from 2004 to 2013, has allowed to significantly improve the knowledge on severe accidents and to disseminate it through courses and ERMSAR conferences. The major investigated topics, involving more than 250 researchers from 22 countries, were in- and ex-vessel corium/debris coolability, molten-core–concrete-interaction, steam explosion, hydrogen combustion and mitigation in containment, impact of oxidising conditions on source term, and iodine chemistry. The ranking of the high priority issues was updated to account for the results of recent international research and for the impact of Fukushima nuclear accidents in Japan. In addition, the ASTEC integral code was further developed to capitalize the new knowledge. The network has reached self-sustainability by integration in mid-2013 into the NUGENIA Association. The main activities and outcomes of the network are presented

  8. Adaptive DSP Algorithms for UMTS: Blind Adaptive MMSE and PIC Multiuser Detection

    NARCIS (Netherlands)

    Potman, J.

    2003-01-01

    A study of the application of blind adaptive Minimum Mean Square Error (MMSE) and Parallel Interference Cancellation (PIC) multiuser detection techniques to Wideband Code Division Multiple Access (WCDMA), the physical layer of Universal Mobile Telecommunication System (UMTS), has been performed as

  9. Easy web interfaces to IDL code for NSTX Data Analysis

    International Nuclear Information System (INIS)

    Davis, W.M.

    2012-01-01

    Highlights: ► Web interfaces to IDL code can be developed quickly. ► Dozens of Web Tools are used effectively on NSTX for Data Analysis. ► Web interfaces are easier to use than X-window applications. - Abstract: Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of “Web Tools” for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser, and not needing X-windows, or accounts and passwords, when used within our firewall. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.

  10. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  11. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...... cross band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  12. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    KAUST Repository

    Alloum, Amira; Lin, Sian Jheng; Al-Naffouri, Tareq Y.

    2016-01-01

    , and more specifically at the application layer where Rateless, LDPC, Reed Slomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet

  13. The National Shipbuilding Research Program. 1997 Ship Production Symposium, Paper Number 20: Design and Production of ANZAC Frigates for the RAN and RNZN: Progress Towards International Competitiveness

    Science.gov (United States)

    1997-04-01

    and New Zealand Industry Involvement ANZIP Australian and New Zealand Industry Program ASSC ANZAC Ship Support Centre ASTEC Australian Science...of performance measurement systems and benchmarking.” In September 1994, the Australian Science, Technology and Engineering Council ( ASTEC ) commenced...more in- depth analysis of the key issues facing Australia in a number of areas. Five Partnerships have been established, one of which is the ASTEC

  14. Design of ACM system based on non-greedy punctured LDPC codes

    Science.gov (United States)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes was designed. The RC-LDPC codes were constructed by a non-greedy puncturing method which showed good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel was proposed. By this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that increasingly significant coding gains and higher throughput can be obtained by the proposed ACM system.
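
    The rate-selection logic of such an ACM controller can be sketched as a simple threshold lookup. The SNR switching thresholds and the puncturing arithmetic below are illustrative assumptions, not values from the record; real thresholds would come from BLER simulations of the actual RC-LDPC family.

```python
from fractions import Fraction

# Hypothetical SNR thresholds (dB) for switching among the rate-compatible
# punctured LDPC rates 2/3 .. 5/6; placeholder values, not from the paper.
RATE_TABLE = [
    (6.0, Fraction(2, 3)),
    (8.0, Fraction(3, 4)),
    (10.0, Fraction(4, 5)),
    (12.0, Fraction(5, 6)),
]

def select_code_rate(snr_db):
    """Pick the highest code rate whose SNR threshold is met."""
    chosen = None
    for threshold, rate in RATE_TABLE:
        if snr_db >= threshold:
            chosen = rate
    return chosen  # None -> channel too poor for any rate in the table

def punctured_rate(k, n, punctured_bits):
    """Effective rate after puncturing p bits from an (n, k) mother code."""
    return Fraction(k, n - punctured_bits)
```

For example, puncturing one bit of a rate-2/3 (6, 4) mother code raises the effective rate to 4/5, which is how a single mother code can serve several ACM operating points.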

  15. HETFIS: High-Energy Nucleon-Meson Transport Code with Fission

    International Nuclear Information System (INIS)

    Barish, J.; Gabriel, T.A.; Alsmiller, F.S.; Alsmiller, R.G. Jr.

    1981-07-01

    A model that includes fission for predicting particle production spectra from medium-energy nucleon and pion collisions with nuclei (Z greater than or equal to 91) has been incorporated into the nucleon-meson transport code, HETC. This report is primarily concerned with the programming aspects of HETFIS (High-Energy Nucleon-Meson Transport Code with Fission). A description of the program data and instructions for operating the code are given. HETFIS is written in FORTRAN IV for IBM computers and is readily adaptable to other systems.

  16. Industry Participation in Defence Research and Development,

    Science.gov (United States)

    1983-12-01

    Research and Development: Proposals for Additional Incentives. ASTEC , 1990. K . Interaction between Industry, Higher Education and Government Laboratories...Incentives for Innovation in Australian Industry. ASTEC , 1983. P. Bibliography. Distribution Document Control Data Sheet AWA I A& I 14l2 p/O)OIP (02... ASTEC and the Senate Committee on Science and the Environment. My Department is already preparing advice for me in this regard and I shall ask them to

  17. Integrated Control System Engineering Support.

    Science.gov (United States)

    1984-12-01

    Advanced Medium Range Air to Air Missile ASTEC Advanced Speech Technology Experimental Configuration BA Body Axis BCIU Bus Control Interface Unit BMU Bus...support nreeded to tie an ASTEC speech recognition system into the DIGISYN fJcility and support an FIGR experiment designed to investigate the voice...information passed to the PDP computer consisted of integers which represented words or phrases recognized by the ASTEC recognition system. An interface

  18. JPRS Report, Soviet Union, USA: Economics, Politics, Ideology, No. 11, November 1987

    Science.gov (United States)

    1988-05-18

    ASTEC ), established a year later, began oper- ating successfully. The revival of Soviet-American economic contacts was short-lived, however, and...conducted. The 10th annual meeting of ASTEC and the 9th session of the joint Soviet-American Trade Commission in June 1986 demonstrated the growing...irrigation equipment, and the chemical industry. JPRS-USA-88-005 18 May 1988 11 The American officials attending the ASTEC meeting displayed

  19. Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2013-01-01

    In this paper we propose a novel scheme for on the fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaption based on the link state...

  20. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    DEFF Research Database (Denmark)

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

    This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both...... adaptation algorithm has linear complexity with the number of available resource blocks, while still provides a very good performance, as shown by simulation results....

  1. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    Science.gov (United States)

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and the value of the sparsity is known before starting each data gathering epoch, thus they ignore the variation of the data observed by the WSNs which are deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed a NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.
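
    The adaptive measurement-formation loop can be caricatured as follows. The rule-of-thumb bound m ≈ c·k·log(n/k), the constant c, and the `estimate_sparsity` callback are illustrative assumptions standing in for the paper's actual termination rules and decoder feedback.

```python
import math

def required_measurements(sparsity_k, n, c=2.0):
    """Rule-of-thumb CS bound: about c * k * log(n / k) measurements
    suffice to recover a k-sparse signal of length n (constant c assumed)."""
    return max(sparsity_k + 1,
               math.ceil(c * sparsity_k * math.log(n / sparsity_k)))

def adaptive_gathering(estimate_sparsity, n, max_rounds=50):
    """Sink queries one network-coded measurement per round and stops once
    the decoder's running sparsity estimate says it has enough; returns the
    number of measurements actually requested."""
    m = 0
    for _ in range(max_rounds):
        m += 1
        k_hat = estimate_sparsity(m)   # decoder's current sparsity estimate
        if m >= required_measurements(k_hat, n):
            break
    return m
```

The point of the sketch is that the measurement count is not fixed in advance: it tracks the (possibly varying) sparsity of the sensed field, which is the gap in the earlier CDG schemes that the feedback design addresses.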

  2. WWER core pattern enhancement using adaptive improved harmony search

    International Nuclear Information System (INIS)

    Nazari, T.; Aghaie, M.; Zolfaghari, A.; Minuchehr, A.; Norouzi, A.

    2013-01-01

    Highlights: ► The classical and improved harmony search algorithms are introduced. ► The advantage of IHS is demonstrated on Shekel's Foxholes. ► The CHS and IHS are compared with other heuristic algorithms. ► The adaptive improved harmony search is applied to two cases. ► Two cases of WWER core are optimized in BOC FA pattern. - Abstract: The efficient operation and fuel management of PWRs are of utmost importance. Core performance analysis constitutes an essential phase in core fuel management optimization. Finding an optimum core arrangement for the loading of fuel assemblies (FAs) in a nuclear core is a complex problem. In this paper, the application of classical harmony search (HS) and adaptive improved harmony search (IHS) to loading pattern (LP) design for pressurized water reactors is described. In this analysis, the main objective is to find the best core pattern that attains the maximum multiplication factor, keff, while respecting the maximum allowable power peaking factors (PPF). Therefore an HS-based LP optimization code is prepared, and the CITATION code, a neutronic calculation code, is applied to obtain the effective multiplication factor, neutron fluxes and power density in the desired cores. Combining the adaptive improved harmony search with the neutronic code, the generated LP optimization code is applicable to PWR cores with large numbers of FAs. In this work, at the first step, HS and IHS efficiencies are compared with some other heuristic algorithms on Shekel's Foxholes problem and the capability of the adaptive improved harmony search is demonstrated; results show efficient application of IHS. At the second step, two WWER cases are studied, and IHS then proffered improved core patterns with regard to the mentioned objective functions.
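
    As a rough illustration of the classical HS loop the record describes (the adaptive IHS additionally varies the pitch-adjust rate and bandwidth over iterations, and LP optimization works on discrete FA permutations rather than a continuous vector), a minimal continuous-variable sketch:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   iters=2000, seed=1):
    """Minimal classical harmony search minimizing a continuous objective.
    hms: harmony memory size; hmcr: memory-considering rate;
    par: pitch-adjust rate. Parameter values here are illustrative."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = memory[rng.randrange(hms)][d]     # memory consideration
                if rng.random() < par:                # pitch adjustment
                    x += rng.uniform(-1, 1) * (hi - lo) * 0.01
                x = min(max(x, lo), hi)
            else:
                x = rng.uniform(lo, hi)               # random consideration
            new.append(x)
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                         # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

In the LP application, the "harmony" would instead encode an FA arrangement and the objective would wrap a neutronic solve (keff minus a PPF penalty); the memory/pitch-adjust/replacement loop stays the same.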

  3. UEP Concepts in Modulation and Coding

    Directory of Open Access Journals (Sweden)

    Werner Henkel

    2010-01-01

    Full Text Available First unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but now, with the introduction of scalable video, UEP is developing into a key concept for the transport of multimedia data. The paper presents an overview of some new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptations, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.

  4. Gyroaveraging operations using adaptive matrix operators

    Science.gov (United States)

    Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock

    2018-05-01

    A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line following manner while preserving the adiabatic magnetic moment μ. These choices make it possible to improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.

  5. The application of finite volume methods for modelling three-dimensional incompressible flow on an unstructured mesh

    Science.gov (United States)

    Lonsdale, R. D.; Webster, R.

    This paper demonstrates the application of a simple finite volume approach to a finite element mesh, combining the economy of the former with the geometrical flexibility of the latter. The procedure is used to model a three-dimensional flow on a mesh of linear eight-node bricks (hexahedra). Simulations are performed for a wide range of flow problems, some in excess of 94,000 nodes. The resulting computer code, ASTEC, which incorporates these procedures, is described.

  6. Learning of spatio-temporal codes in a coupled oscillator system.

    Science.gov (United States)

    Orosz, Gábor; Ashwin, Peter; Townley, Stuart

    2009-07-01

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
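
    The frequency-adaptation idea can be sketched with a single teaching/learning oscillator pair. The Kuramoto-style coupling form, the adaptation gain, and all numerical values below are illustrative assumptions, not the paper's multi-oscillator cluster model:

```python
import math

def simulate(omega_teacher=1.5, omega_learner=0.8, coupling=1.0,
             eps=0.2, dt=0.01, steps=20000):
    """One teaching and one learning phase oscillator (Euler integration).
    The learner slowly adapts its natural frequency w_l using the
    instantaneous phase difference, mimicking frequency adaptation."""
    th_t, th_l = 0.0, 0.5
    w_l = omega_learner
    for _ in range(steps):
        diff = th_t - th_l
        th_t += dt * omega_teacher                      # teacher runs freely
        th_l += dt * (w_l + coupling * math.sin(diff))  # phase coupling
        w_l += dt * eps * math.sin(diff)                # frequency adaptation
    return w_l
```

Once the pair phase-locks, the residual phase lag drives w_l exponentially toward the teacher's frequency, which is the mechanism by which the learning system "copies" the teacher's encoded state in this toy setting.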

  7. Research and Design in Unified Coding Architecture for Smart Grids

    Directory of Open Access Journals (Sweden)

    Gang Han

    2013-09-01

    Full Text Available A standardized and shared information platform is the foundation of the Smart Grids. In order to improve the information integration of power grid dispatching centers and achieve efficient data exchange, sharing and interoperability, a unified coding architecture is proposed. The architecture comprises a coding management layer, a coding generation layer, an information models layer and an application system layer. The hierarchical design allows the whole coding architecture to adapt to different application environments, different interfaces and loosely coupled requirements, realizing the integrated model management function of the power grids. A life cycle and survivability evaluation method for the unified coding architecture is also proposed, ensuring the stability and availability of the coding architecture. Finally, future directions for Smart Grid coding technology are discussed.

  8. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  9. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    International Nuclear Information System (INIS)

    Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-01-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder that admits level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems

  10. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-05-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder that admits level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems.

  11. KEWPIE: a dynamical cascade code for decaying excited compound nuclei

    OpenAIRE

    Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David

    2003-01-01

    A new dynamical cascade code for decaying hot nuclei is proposed and specially adapted to the synthesis of super-heavy nuclei. For such a case, the interesting channel is the tiny fraction that decays through particle emission; thus the code avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows one to evaluate both statistical...

  12. Sustainable integration of EU research in severe accident phenomenology and management

    International Nuclear Information System (INIS)

    Van Dorsselaere, Jean-Pierre; Albiol, Thierry; Chaumont, Bernard; Haste, Tim; Journeau, Christophe; Meyer, Leonhard; Sehgal, Bal Raj; Schwinges, Bernd; Beraha, David; Annunziato, Alessandro; Zeyen, Roland

    2011-01-01

    Highlights: → The SARNET network gathers most worldwide actors involved in severe accident research. → It defines common research programmes for resolving the most important pending safety issues. → It optimises the use of the available European resources and constitutes sustainable research groups. → It disseminates the knowledge on severe accidents through education courses. → Knowledge produced is capitalized through physical models in the ASTEC simulation code. - Abstract: In order to optimise the use of the available means and to constitute sustainable research groups in the European Union, the Severe Accident Research NETwork of Excellence (SARNET) has gathered, between 2004 and 2008, 51 organizations representing most of the actors involved in severe accident (SA) research in Europe plus Canada. This project was co-funded by the European Commission (EC) under the 6th Euratom Framework Programme. Its objective was to resolve the most important pending issues for enhancing, in regard of SA, the safety of existing and future nuclear power plants (NPPs). SARNET tackled the fragmentation that existed between the national R and D programmes, in defining common research programmes and developing common computer codes and methodologies for safety assessment. The Joint Programme of Activities consisted in: -Implementing an advanced communication tool for accessing all project information, fostering exchange of information, and managing documents; - Harmonizing and re-orienting the research programmes, and defining new ones; -Analyzing the experimental results provided by research programmes in order to elaborate a common understanding of relevant phenomena; -Developing the ASTEC code (integral computer code used to predict the NPP behaviour during a postulated SA) by capitalizing in terms of physical models the knowledge produced within SARNET; - Developing scientific databases, in which the results of research experimental programmes are stored in a common

  13. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  14. A general purpose code for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wilcke, W.W.; Rochester Univ., NY

    1984-01-01

    A general-purpose computer code MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general purpose computer, operating on a vector describing the time dependent state of the system under simulation. The instruction set of the 'computer' is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations. (orig.)

  15. Preparation of the TRANSURANUS code for TEMELIN NPP

    International Nuclear Information System (INIS)

    Klouzal, J.

    2011-01-01

    In 2010, the Temelin NPP started using TVSA-T fuel supplied by JSC TVEL. The transition process included the implementation of several new core reload design codes. The TRANSURANUS code was selected for the evaluation of fuel rod thermomechanical performance. The adaptation and validation of the code were performed by the Nuclear Research Institute Rez. The TRANSURANUS code contains a wide selection of alternative models for most phenomena important to fuel behaviour. It was therefore necessary to select, based on a comparison with experimental data, those most suitable for the modeling of TVSA-T fuel rods. In some cases, new models were implemented. Software tools and a methodology for the evaluation of the proposed core reload design using the TRANSURANUS code were also developed at NRI. The software tools include the interface to the core physics code ANDREA and a set of scripts for automated execution and processing of the computational runs. Independent confirmation of some of the vendor-specified core reload design criteria was performed using TRANSURANUS. (authors)

  16. Subband Adaptive Array for DS-CDMA Mobile Radio

    Directory of Open Access Journals (Sweden)

    Tran Xuan Nam

    2004-01-01

    Full Text Available We propose a novel scheme of subband adaptive array (SBAA) for direct-sequence code division multiple access (DS-CDMA). The scheme exploits the spreading code and pilot signal as the reference signal to estimate the propagation channel. Moreover, instead of combining the array outputs at each output tap using a synthesis filter and then despreading them, we despread the array outputs at each output tap directly by the desired user's code, dispensing with the synthesis filter. Although its configuration is far different from that of 2D RAKEs, the proposed scheme exhibits performance roughly equivalent to that of 2D RAKEs while having a lower computational load due to utilising adaptive signal processing in subbands. Simulations are carried out to explore the performance of the scheme and compare it with that of the standard 2D RAKE.

  17. CAreDroid: Adaptation Framework for Android Context-Aware Applications.

    Science.gov (United States)

    Elmalaki, Salma; Wanner, Lucas; Srivastava, Mani

    2015-09-01

    Context-awareness is the ability of software systems to sense and adapt to their physical environment. Many contemporary mobile applications adapt to changing locations, connectivity states, available computational and energy resources, and proximity to other users and devices. Nevertheless, there is little systematic support for context-awareness in contemporary mobile operating systems. Because of this, application developers must build their own context-awareness adaptation engines, dealing directly with sensors and polluting application code with complex adaptation decisions. In this paper, we introduce CAreDroid, a framework designed to decouple the application logic from the complex adaptation decisions in Android context-aware applications. In this framework, developers are required only to focus on the application logic by providing a list of methods that are sensitive to certain contexts, along with the permissible operating ranges under those contexts. At run time, CAreDroid monitors the context of the physical environment and intercepts calls to sensitive methods, activating only the blocks of code that best fit the current physical context. CAreDroid is implemented as part of the Android runtime system. By pushing context monitoring and adaptation into the runtime system, CAreDroid eases the development of context-aware applications and increases their efficiency. In particular, case-study applications implemented using CAreDroid are shown to require less than half the lines of code and to execute at least 10× more efficiently than equivalent context-aware applications that use only standard Android APIs.
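
    A toy sketch of the dispatch idea in Python (CAreDroid itself works inside the Android runtime and its real API differs; the decorator, the context store, and the filter variants below are all hypothetical):

```python
# Hypothetical context store; in CAreDroid this would be fed by sensors.
current_context = {"battery": 0.9}

def context_sensitive(ranges):
    """Route calls to whichever implementation's declared operating range
    matches the current context value; fall back to the decorated default.
    ranges: list of ((lo, hi), implementation) pairs on one context key."""
    def decorator(default_impl):
        def dispatch(*args, **kwargs):
            level = current_context["battery"]
            for (lo, hi), impl in ranges:
                if lo <= level <= hi:
                    return impl(*args, **kwargs)
            return default_impl(*args, **kwargs)
        return dispatch
    return decorator

def high_power_filter(x):
    return [v * 2 for v in x]   # stand-in for an accurate, expensive variant

def low_power_filter(x):
    return list(x)              # stand-in for a cheap fallback variant

@context_sensitive([((0.5, 1.0), high_power_filter),
                    ((0.0, 0.5), low_power_filter)])
def process(x):
    return list(x)              # default when no range matches
```

The application code only ever calls `process`; which variant runs is decided at call time from the context, which is the decoupling the framework aims at.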

  18. Congressional Submission RDT&E Descriptive Summaries, FY 1994

    Science.gov (United States)

    1993-04-01

    U) Extremely High Frequency (EHF) lower cost, communication technologies ( ASTEC ) will be developed to support the ASD/C 3 1 MILSATCOM modernization...restructured and formalized into the ASTEC , CAMEO and IMPACT Programs. F. (U) PROGRAM DOCUMENTATION: * (U) U.S. Air Force/DARPA MOA dated 1988 * (U) U.S...Army/DARPA MOA dated 1990 * (U) SDIO/DARPA MOA dated 1990 * (U) DARPA/U.S. Air Force TAOS MOA dated 1992 * (U) DARPA/Joint Services ASTEC MOA (in

  19. JPRS Report, Soviet Union, World Economy & International Relations, No. 1, January 1989.

    Science.gov (United States)

    1989-05-12

    Council ( ASTEC ). In his opinion, it is necessary before approaching a solution of problems of bilateral relations to get a clear idea of one’s partners...engineering products constitute a very negligible proportion. The task of the ASTEC is to expand bilateral trade and promote its development. But...that its further elaboration will not be long in coming. It was decided, given the direct assistance of the ASTEC , to create a "strike" group of

  20. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    International Nuclear Information System (INIS)

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    1988-12-01

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can interface with other programs easily. The CATS produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs

  1. Predictability of iodine chemistry in the containment of a nuclear power plant under hypothetical severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, L.E.; Vela-Garcia, M.; Fontanet, J. [Unit of Nuclear Safety Research, CIEMAT, Madrid (Spain)

    2007-07-01

    One of the areas of top interest in the severe accident arena for obtaining an accurate prediction of the Source Term is iodine chemistry. In this paper, an assessment of the current capability of MELCOR and ASTEC to predict iodine chemistry within the containment in the case of a postulated severe accident has been carried out. The experiments FPT1 and FPT2 of the PHEBUS-FP project have been used for comparisons, since they were carried out under rather different containment conditions during the chemistry phase (subcooled vs. saturated sump, acid vs. alkaline pH), which makes them very suitable for assessing the current modeling capability of in-containment iodine chemistry models. The results obtained indicate that, even though both integral codes have specific areas related to iodine chemistry that should be further developed and their approaches to the matter are drastically different, at present ASTEC-IODE allows for a more comprehensive simulation of containment iodine chemistry. More importantly, the lack of maturity of these codes could amplify the so-called user effect, so it would be highly advisable to perform sensitivity studies on iodine chemistry aspects when calculating Source Term scenarios. Key aspects in need of further research are: gaseous iodine chemistry (absent in MELCOR), organic iodine chemistry, and adsorption/desorption on/from containment surfaces. (authors)

  2. Layer-based buffer aware rate adaptation design for SHVC video streaming

    Science.gov (United States)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
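
    A minimal sketch of a buffer-aware layer-selection rule of the kind described; the buffer thresholds, headroom margins, and function names are illustrative assumptions, not the paper's algorithm:

```python
def select_layers(layer_rates_kbps, bandwidth_kbps, buffer_s,
                  low_buffer_s=5.0, high_buffer_s=15.0):
    """Pick how many SHVC layers (BL + ELs) to request for the next segment.
    Cumulative layer rates must fit the estimated bandwidth; buffer fullness
    biases the choice: a draining buffer demands extra headroom, a full one
    lets the margin shrink. Threshold/margin values are placeholders."""
    if buffer_s < low_buffer_s:
        margin = 1.3        # low buffer: demand 30% bandwidth headroom
    elif buffer_s < high_buffer_s:
        margin = 1.1
    else:
        margin = 1.0        # comfortable buffer: use the full estimate
    total, layers = 0.0, 0
    for rate in layer_rates_kbps:
        total += rate       # layers are cumulative: EL needs all lower layers
        if total * margin > bandwidth_kbps:
            break
        layers += 1
    return max(layers, 1)   # always fetch at least the base layer
```

Because enhancement layers only refine what is already buffered, dropping a layer under pressure degrades quality gradually instead of forcing the abrupt representation switches seen in simulcast streaming.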

  3. WWER core pattern enhancement using adaptive improved harmony search

    Energy Technology Data Exchange (ETDEWEB)

    Nazari, T. [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Aghaie, M., E-mail: M_Aghaie@sbu.ac.ir [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Zolfaghari, A.; Minuchehr, A.; Norouzi, A. [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of)

    2013-01-15

    Highlights: • The classical and improved harmony search algorithms are introduced. • The advantage of IHS is demonstrated on Shekel's Foxholes. • The CHS and IHS are compared with other heuristic algorithms. • The adaptive improved harmony search is applied to two cases. • Two WWER cores are optimized for the BOC FA pattern. - Abstract: The efficient operation and fuel management of PWRs are of utmost importance. Core performance analysis constitutes an essential phase in core fuel management optimization. Finding an optimum core arrangement for the loading of fuel assemblies (FAs) in a nuclear core is a complex problem. In this paper, the application of classical harmony search (HS) and adaptive improved harmony search (IHS) to loading pattern (LP) design for pressurized water reactors is described. In this analysis, the main objective is to find the core pattern that attains the maximum multiplication factor, k-eff, while respecting the maximum allowable power peaking factor (PPF). Therefore an HS-based LP optimization code was prepared, and the CITATION neutronic calculation code was applied to obtain the effective multiplication factor, neutron fluxes and power density in the candidate cores. The resulting LP optimization code, combining the adaptive improved harmony search with the neutronic code, is applicable to PWR cores with large numbers of FAs. In a first step, the efficiencies of HS and IHS are compared with some other heuristic algorithms on Shekel's Foxholes problem, demonstrating the capability of the adaptive improved harmony search; the results show the efficient application of IHS. In a second step, two WWER cases are studied, for which IHS yields improved core patterns with regard to the stated objective functions.
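A minimal sketch of classical harmony search may help fix ideas. In the paper the objective is evaluated by the CITATION neutronic code; here it is replaced by a simple sphere function, and all parameter values (harmony memory size, hmcr, par, pitch bandwidth) are illustrative assumptions, not the authors' settings.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Classical harmony search minimising f over box bounds.

    hms:  harmony memory size
    hmcr: harmony memory considering rate
    par:  pitch adjusting rate (local perturbation probability)
    """
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # draw this dimension from memory
                x = mem[rng.randrange(hms)][d]
                if rng.random() < par:         # pitch adjustment: small tweak
                    x += rng.uniform(-1, 1) * 0.05 * (hi - lo)
            else:                              # otherwise random re-initialisation
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: f(mem[i]))
        if f(new) < f(mem[worst]):             # replace worst harmony if improved
            mem[worst] = new
    return min(mem, key=f)

# toy objective standing in for a (negated) k-eff evaluation
best = harmony_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
```

In an LP-optimization setting each "harmony" would encode a fuel assembly arrangement and f would call the neutronic solver.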

  4. An Adaptation of the HELIOS/MASTER Code System to the Analysis of VHTR Cores

    International Nuclear Information System (INIS)

    Noh, Jae Man; Lee, Hyun Chul; Kim, Kang Seog; Kim, Yong Hee

    2006-01-01

    KAERI is developing a new computer code system for the analysis of VHTR cores, based on the existing HELIOS/MASTER code system which was originally developed for LWR core analysis. In VHTR reactor physics there are several unique neutronic characteristics that cannot be handled easily by the conventional computer code systems applied to LWR core analysis. Typical examples of such characteristics are the double heterogeneity problem due to the particulate fuels, the effects of a spectrum shift and thermal up-scattering due to the graphite moderator, and a strong fuel/reflector interaction. In order to facilitate an easy treatment of such characteristics, we developed some methodologies for the HELIOS/MASTER code system and tested their applicability to VHTR core analysis.

  5. Overview of IRSN R and D activities on severe accidents

    International Nuclear Information System (INIS)

    Chaumont, B.; Raimond, E.; Dorsselaere, J.P. Van; Simondi-Teisseire, B.

    2009-01-01

    IRSN activities on SA include safety assessment for existing and future nuclear power plants and experimental reactors, level 2 Probabilistic Safety Assessment (PSA), development of simulation codes and realization of experiments. R and D concerns the following phenomena: fission products behaviour, in particular iodine and ruthenium chemistry (CHIP and EPICUR experiments); core melting and reflooding of damaged cores (MOZART, BECARRE and PEARL experiments); Molten-Core-Concrete-Interaction; thermo-mechanical behaviour of the reactor coolant system and the vessel; direct containment heating, steam explosion; hydrogen distribution and combustion in containment. Other activities concern integration of knowledge into SA codes, validation of the codes, and benchmarking activities. The main codes developed at IRSN are the ASTEC integral code and the detailed codes such as M3CD for fuel coolant interaction and TONUS for hydrogen risk assessment. One essential application of the SA codes is the systematic analysis of the different possible scenarios in the frame of the development of level 2 PSA. (authors)

  6. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses.

    Science.gov (United States)

    Burke, Christopher J; Baddeley, Michelle; Tobler, Philippe N; Schultz, Wolfram

    2016-09-28

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise
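The partial-adaptation idea can be illustrated with a toy response model (entirely hypothetical, not the authors' analysis model; the name `response` and the value of `alpha` are invented): with full adaptation (alpha = 1) only deviations from the context expectation are coded, so an unexpectedly high gain and an unexpectedly low loss evoke identical responses; with partial adaptation (alpha < 1) the two remain distinguishable, preserving information about the block's incentives.

```python
def response(outcome, context_mean, alpha=0.6, gain=1.0):
    # alpha = 1.0: full adaptation -- only the deviation from expectation is coded
    # alpha = 0.0: no adaptation   -- the absolute outcome value is coded
    return gain * (outcome - alpha * context_mean)

# gain block expects +5 on average; loss block expects -5.
full_gain = response(8, 5, alpha=1.0)    # unexpectedly high gain, deviation +3
full_loss = response(-2, -5, alpha=1.0)  # unexpectedly low loss, deviation +3
part_gain = response(8, 5, alpha=0.6)    # partial adaptation keeps them apart
part_loss = response(-2, -5, alpha=0.6)
```

Under full adaptation the two responses collapse to the same value; under partial adaptation the gain-block response stays larger, so a downstream reader can still tell a gain from a reduced loss.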

  7. Simulation and Rapid Prototyping of Adaptive Control Systems using the Adaptive Blockset for Simulink

    DEFF Research Database (Denmark)

    Ravn, Ole

    1998-01-01

    The paper describes the design considerations and implementational aspects of the Adaptive Blockset for Simulink which has been developed in a prototype implementation. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are: identification, controller design, controller and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.
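The identification block of an adaptive controller is commonly a recursive least-squares (RLS) estimator. The sketch below is a generic, self-contained illustration of that idea, not code from the Adaptive Blockset itself; the first-order ARX model, the forgetting factor and all names are assumptions.

```python
import random

def rls_identify(us, ys, lam=0.99):
    """Recursive least squares for y[k] = a*y[k-1] + b*u[k-1].

    us, ys: input and output sequences; lam: forgetting factor.
    Returns the parameter estimates [a, b].
    """
    theta = [0.0, 0.0]                      # parameter estimates [a, b]
    P = [[1000.0, 0.0], [0.0, 1000.0]]      # large initial covariance
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k - 1]]        # regressor vector
        e = ys[k] - (theta[0] * phi[0] + theta[1] * phi[1])  # prediction error
        # gain K = P*phi / (lam + phi' P phi)
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]
        theta = [theta[0] + K[0] * e, theta[1] + K[1] * e]
        # covariance update: P = (P - K * phi' P) / lam  (P stays symmetric)
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

# identify a known toy system a=0.8, b=0.5 from noiseless data
rng = random.Random(0)
u = [rng.uniform(-1, 1) for _ in range(200)]
y = [0.0]
for k in range(1, 200):
    y.append(0.8 * y[k - 1] + 0.5 * u[k - 1])
a_hat, b_hat = rls_identify(u, y)
```

In the blockset the estimates would feed the controller-design block (e.g. pole placement) at every sample.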

  8. A CABAC codec of H.264/AVC with secure arithmetic coding

    Science.gov (United States)

    Neji, Nihel; Jridi, Maher; Alfalou, Ayman; Masmoudi, Nouri

    2013-02-01

    This paper presents an optimized H.264/AVC coding system for HDTV displays based on a typical flow with high coding efficiency and statistics-adaptivity features. For high quality streaming, the codec uses a binary arithmetic encoding/decoding algorithm with high complexity and a JVCE (Joint Video Compression and Encryption) scheme. In fact, particular attention is given to simultaneous compression and encryption applications to gain security without compromising the speed of transactions [1]. The proposed design allows us to encrypt the information using a pseudo-random number generator (PRNG). Thus we achieve the two operations (compression and encryption) simultaneously and in a dependent manner, which is a novelty in this kind of architecture. Moreover, we investigated the hardware implementation of the CABAC (Context-based Adaptive Binary Arithmetic Coding) codec. The proposed architecture is based on an optimized binarizer/de-binarizer to handle significant pixel-rate videos with low cost and high performance for the most frequent SEs. This was checked using HD video frames. The synthesis results obtained using an FPGA (Xilinx ISE) show that our design is suitable for coding main-profile video streams.

  9. Sequencing of 50 human exomes reveals adaptation to high altitude

    DEFF Research Database (Denmark)

    Yi, Xin; Liang, Yu; Huerta-Sanchez, Emilia

    2010-01-01

    Residents of the Tibetan Plateau show heritable adaptations to extreme altitude. We sequenced 50 exomes of ethnic Tibetans, encompassing coding sequences of 92% of human genes, with an average coverage of 18x per individual. Genes showing population-specific allele frequency changes, which represent strong candidates for altitude adaptation, were identified. The strongest signal of natural selection came from endothelial Per-Arnt-Sim (PAS) domain protein 1 (EPAS1), a transcription factor involved in response to hypoxia. One single-nucleotide polymorphism (SNP) at EPAS1 shows a 78% frequency difference between the Tibetan and Han samples, implicating EPAS1 in genetic adaptation to high altitude.
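A population-differentiation scan of the kind used to flag candidates such as EPAS1 can be caricatured in a few lines. The function and the toy frequencies below are invented for illustration (one SNP mimics the reported 78% frequency difference); real scans use statistics such as Fst computed from genotype counts.

```python
def freq_diff_scan(pop1_freqs, pop2_freqs, top=1):
    """Rank SNP indices by absolute allele-frequency difference
    between two populations (largest first)."""
    diffs = [(abs(p - q), i)
             for i, (p, q) in enumerate(zip(pop1_freqs, pop2_freqs))]
    diffs.sort(reverse=True)
    return [i for _, i in diffs[:top]]

# toy allele frequencies at four SNPs; SNP 2 mimics the EPAS1-like signal
tibetan = [0.12, 0.30, 0.87, 0.55]
han     = [0.10, 0.33, 0.09, 0.50]
```

Calling `freq_diff_scan(tibetan, han)` returns the index of the most differentiated SNP, the crude analogue of the selection signal the study reports.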

  10. The PASC-3 code system and the UNIPASC environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.

    1991-08-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and its associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219-group cross section library derived from JEF-1, for which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, model parameters can be represented by variables instead of literal values, which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  11. Critical roles for a genetic code alteration in the evolution of the genus Candida.

    Science.gov (United States)

    Silva, Raquel M; Paredes, João A; Moura, Gabriela R; Manadas, Bruno; Lima-Costa, Tatiana; Rocha, Rita; Miranda, Isabel; Gomes, Ana C; Koerkamp, Marian J G; Perrot, Michel; Holstege, Frank C P; Boucherie, Hélian; Santos, Manuel A S

    2007-10-31

    During the last 30 years, several alterations to the standard genetic code have been discovered in various bacterial and eukaryotic species. Sense and nonsense codons have been reassigned or reprogrammed to expand the genetic code to selenocysteine and pyrrolysine. These discoveries highlight unexpected flexibility in the genetic code, but do not elucidate how the organisms survived the proteome chaos generated by codon identity redefinition. In order to shed new light on this question, we have reconstructed a Candida genetic code alteration in Saccharomyces cerevisiae and used a combination of DNA microarrays, proteomics and genetics approaches to evaluate its impact on gene expression, adaptation and sexual reproduction. This genetic manipulation blocked mating, locked yeast in a diploid state, remodelled gene expression and created stress cross-protection that generated adaptive advantages under environmentally challenging conditions. This study highlights unanticipated roles for codon identity redefinition during the evolution of the genus Candida, and strongly suggests that genetic code alterations create genetic barriers that speed up speciation.
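The codon-identity redefinition at the heart of the study can be illustrated with a toy translation table. The CUG leucine-to-serine reassignment is the alteration characteristic of many Candida species; the mini-ORF and the deliberately truncated codon table below are illustrative only.

```python
# Codon table limited to the codons used in the toy ORF below.
STANDARD = {"AUG": "M", "CUG": "L", "UCU": "S", "GCU": "A", "UAA": "*"}
# Candida-style alteration: the single codon CUG is reassigned Leu -> Ser.
CANDIDA = dict(STANDARD, CUG="S")

def translate(mrna, table):
    """Translate an mRNA string codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(mrna), 3):
        aa = table[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

orf = "AUGCUGGCUUAA"  # Met-Leu-Ala-Stop under the standard code
```

The same ORF yields different proteins under the two tables, which is exactly the proteome-wide perturbation the reconstructed strain had to survive.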

  12. Development of a numerical tool for safety assessment and emergency management of experimental reactors

    International Nuclear Information System (INIS)

    Maas, L.; Beuter, A.; Seropian, C.

    2010-01-01

    The Institute of Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. Among its duties, one important item is to provide help for the management of emergency situations in case of an accident occurring in a French nuclear facility. In this framework, IRSN develops and applies numerical tools dealing with containment management issues. Up to now, IRSN has not had any specific tool for experimental reactors. Accordingly, it was decided to extend the ASTEC code, devoted to severe accident scenarios for Pressurized Water Reactors, to this kind of reactor. This lumped-parameter code, co-developed by IRSN and GRS (Germany), covers the entire phenomenology from the initiating event up to fission product release outside the reactor containment, except for steam explosion and the mechanical integrity of the containment. A first application to experimental reactors was carried out to assess the High Flux Reactor (HFR) operator's improvement proposal concerning containment management during accidental situations. This reactor, located in Grenoble (France), is composed of a double-wall containment with a pressurized containment annulus preventing any direct leakage into the environment. Until now, in case of severe accidents (mainly core melting in pool, or the explosive reactivity accident called BORAX), HFR emergency management consisted of isolating the containment building in the early stage of the accident, to prevent any radioactive product release to the environment. The operator decided to improve this containment management during accidental situations by using an air filtering venting system able to maintain a slight sub-atmospheric pressure in the reactor building. The operator's demonstration of the efficiency of this new system is mainly based on containment pressure evaluations during accidental transients. IRSN assessed these calculations through ASTEC calculations. Finally, a global agreement was

  13. NALAP: an LMFBR system transient code

    International Nuclear Information System (INIS)

    Martin, B.A.; Agrawal, A.K.; Albright, D.C.; Epel, L.G.; Maise, G.

    1975-07-01

    NALAP is an LMFBR system transient code. This code, adapted from the light water reactor transient code RELAP 3B, simulates the thermal-hydraulic response of sodium-cooled fast breeder reactors when subjected to postulated accidents such as a massive pipe break, as well as a variety of other upset conditions that do not disrupt the system geometry. The various components of the plant are represented by control volumes. These control volumes are connected by junctions, some of which may be leak or fill junctions. The fluid flow equations are modeled as compressible, single-stream flow with momentum flux in one dimension. The transient response is computed by integrating the thermal-hydraulic conservation equations from user-initialized operating conditions by an implicit numerical scheme. A point kinetics approximation is used to represent the time-dependent heat generation in the reactor core.

  14. BOA, Beam Optics Analyzer: A Particle-In-Cell Code

    International Nuclear Information System (INIS)

    Bui, Thuc

    2007-01-01

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions and self magnetic fields, and to implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient; in particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently unfeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.

  15. A first accident simulation for Angra-1 power plant using the ALMOD computer code

    International Nuclear Information System (INIS)

    Camargo, C.T.M.

    1981-02-01

    The acquisition of the Almod computer code from GRS-Munich by CNEN has permitted calculations of transients in PWR nuclear power plants in which no loss of coolant occurs. The implementation of the German computer code Almod and its application to the calculation of Angra-1, a nuclear power plant different from the KWU power plants, demanded study and model adaptations; for economic reasons, simplifications and optimizations were also necessary. The first results define the analytical potential of the computer code, confirm the adequacy of the adaptations made and provide relevant conclusions about the Angra-1 safety analysis, showing at the same time areas in which the model can be applied or simply improved. (Author) [pt

  16. KEWPIE: A dynamical cascade code for decaying excited compound nuclei

    Science.gov (United States)

    Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David

    2004-05-01

    A new dynamical cascade code for decaying hot nuclei is proposed, specially adapted to the synthesis of super-heavy nuclei. In that case, the channel of interest is the tiny fraction of nuclei that decay through particle emission, so the code avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows the evaluation of both statistical and dynamical observables. Results are successfully compared to experimental data.
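The deterministic (non-Monte-Carlo) time-dependent scheme can be caricatured as a direct integration of the decay equations for a compound nucleus with two competing channels. Everything below (function name, rate values, the simple explicit Euler step) is an invented illustration of the idea, not the KEWPIE scheme itself.

```python
def neutron_branch_fraction(gamma_n, gamma_f, dt=1e-22, t_max=1e-18):
    """Integrate the decay of a compound nucleus with a constant neutron
    emission rate gamma_n (1/s) and a possibly time-dependent fission rate
    gamma_f(t) (1/s). Returns the total probability of decaying by neutron
    emission -- the tiny survival channel of interest for super-heavy
    element synthesis."""
    p = 1.0           # survival probability of the compound nucleus
    via_neutron = 0.0 # cumulative probability of the neutron-emission branch
    t = 0.0
    while t < t_max and p > 1e-12:
        gf = gamma_f(t)                    # fission rate may be transient
        via_neutron += p * gamma_n * dt    # neutron branch in this time step
        p *= 1.0 - (gamma_n + gf) * dt     # explicit Euler survival update
        t += dt
    return via_neutron

# with constant rates the fraction must approach gamma_n / (gamma_n + gamma_f)
frac = neutron_branch_fraction(1e20, lambda t: 4e20)
```

A time-dependent `gamma_f` (e.g. a fission width that ramps up as collective motion builds) changes the branching, which is exactly why the explicit time dependence matters.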

  17. Modeling for deformable mirrors and the adaptive optics optimization program

    International Nuclear Information System (INIS)

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-01-01

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete-actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient, object-oriented C++ implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing computer language.
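Hartmann-sensor-based correction usually reduces to a least-squares solve from measured spot slopes to actuator commands through a calibrated influence matrix. The following is a generic sketch under that assumption, not the Beamlet/NIF code; the tiny Gauss-Jordan solver and all names are illustrative.

```python
def lstsq_actuators(influence, slopes):
    """Solve min ||A x - s|| for actuator commands x via the normal equations.

    influence[i][j]: slope measured at sensor i for a unit poke of actuator j
    slopes[i]:       measured wavefront slope at sensor i
    """
    m, n = len(influence), len(influence[0])
    # normal equations: (A^T A) x = A^T s
    AtA = [[sum(influence[k][i] * influence[k][j] for k in range(m))
            for j in range(n)] for i in range(n)]
    Atb = [sum(influence[k][i] * slopes[k] for k in range(m)) for i in range(n)]
    # Gauss-Jordan elimination (no pivoting; fine for a well-conditioned toy)
    for col in range(n):
        piv = AtA[col][col]
        for j in range(col, n):
            AtA[col][j] /= piv
        Atb[col] /= piv
        for row in range(n):
            if row != col and AtA[row][col] != 0.0:
                f = AtA[row][col]
                for j in range(col, n):
                    AtA[row][j] -= f * AtA[col][j]
                Atb[row] -= f * Atb[col]
    return Atb

# 3 sensors, 2 actuators; slopes generated by the command vector [2, -1]
x = lstsq_actuators([[1, 0], [0, 1], [1, 1]], [2, -1, 1])
```

In a real system the commands would then be filtered (actuator limits, waffle suppression) before being applied to the mirror.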

  18. Severe accidents in nuclear reactors

    International Nuclear Information System (INIS)

    Ohai, Dumitru; Dumitrescu, Iulia; Tunaru, Mariana

    2004-01-01

    The likelihood of accidents leading to core meltdown in nuclear reactors is low. The consequences of such an event are, however, so severe that developing and implementing adequate measures for preventing or diminishing the consequences of such events are of paramount importance. The analysis of severe accidents requires sophisticated computation codes, but relevant experiments are also necessary for checking the accuracy of the predictions and the capability of these codes. In this paper an overview of severe accidents worldwide is presented, with definitions, computation codes and related experiments. Experimental research on severe accidents has been conducted at INR Pitesti since 2003, when the Institute joined the SARNET Excellence Network. The INR activity within SARNET consists of studying scenarios of severe accidents by means of the ASTEC and RELAP/SCDAP codes and conducting bench-scale experiments.

  19. Adaptation and selective information transmission in the cricket auditory neuron AN2.

    Directory of Open Access Journals (Sweden)

    Klaus Wimmer

    Full Text Available Sensory systems adapt their neural code to changes in the sensory environment, often on multiple time scales. Here, we report a new form of adaptation in a first-order auditory interneuron (AN2 of crickets. We characterize the response of the AN2 neuron to amplitude-modulated sound stimuli and find that adaptation shifts the stimulus-response curves toward higher stimulus intensities, with a time constant of 1.5 s for adaptation and recovery. The spike responses were thus reduced for low-intensity sounds. We then address the question whether adaptation leads to an improvement of the signal's representation and compare the experimental results with the predictions of two competing hypotheses: infomax, which predicts that information conveyed about the entire signal range should be maximized, and selective coding, which predicts that "foreground" signals should be enhanced while "background" signals should be selectively suppressed. We test how adaptation changes the input-response curve when presenting signals with two or three peaks in their amplitude distributions, for which selective coding and infomax predict conflicting changes. By means of Bayesian data analysis, we quantify the shifts of the measured response curves and also find a slight reduction of their slopes. These decreases in slopes are smaller, and the absolute response thresholds are higher than those predicted by infomax. Most remarkably, and in contrast to the infomax principle, adaptation actually reduces the amount of encoded information when considering the whole range of input signals. The response curve changes are also not consistent with the selective coding hypothesis, because the amount of information conveyed about the loudest part of the signal does not increase as predicted but remains nearly constant. Less information is transmitted about signals with lower intensity.

  20. Blind Recognition of Binary BCH Codes for Cognitive Radios

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2016-01-01

    Full Text Available A novel algorithm for the blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm is based on soft-decision situations. The code length is first estimated by comparing the Log-Likelihood Ratios (LLRs) of the syndromes, which are obtained according to the minimum binary parity-check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimal polynomials, the code roots and generator polynomial are reconstructed. Compared with some previous approaches, our algorithm yields better performance even at very low Signal-to-Noise Ratios (SNRs), with lower calculation complexity. Simulation results show the efficiency of the proposed algorithm.
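The syndrome test underlying the recognition algorithm can be shown in a hard-decision caricature: the parity-check matrix of the true code annihilates every clean codeword, while a mismatched candidate does not. (The paper's soft-decision version compares syndrome LLRs instead of counting non-zero syndromes; the Hamming(7,4) matrices below stand in for BCH candidates and are illustrative only.)

```python
from itertools import product

H_TRUE = [  # Hamming(7,4) parity-check matrix (the "right" candidate)
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H_WRONG = [  # a mismatched candidate of the same size
    [1, 1, 0, 0, 1, 0, 1],
    [0, 1, 1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1, 1, 0],
]

def syndrome_score(H, words):
    """Fraction of words with a non-zero syndrome under H (0 => perfect fit)."""
    bad = 0
    for w in words:
        s = [sum(h * b for h, b in zip(row, w)) % 2 for row in H]
        bad += any(s)
    return bad / len(words)

# enumerate the 16 codewords of the true code (null space of H_TRUE)
codewords = [w for w in product([0, 1], repeat=7)
             if not any(sum(h * b for h, b in zip(row, w)) % 2
                        for row in H_TRUE)]
```

A blind recogniser sweeps candidate matrices over the received stream and keeps the one with the lowest score (or, in the soft version, the most confident syndrome LLRs).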

  1. Building codes: An often overlooked determinant of health.

    Science.gov (United States)

    Chauvin, James; Pauls, Jake; Strobl, Linda

    2016-05-01

    Although the vast majority of the world's population spends most of their time in buildings, building codes are not often thought of as 'determinants of health'. The standards that govern the design, construction, and use of buildings affect our health, security, safety, and well-being. This is true for dwellings, schools and universities, shopping centers, places of recreation, places of worship, health-care facilities, and workplaces. We urge proactive engagement by the global public health community in developing these codes, and in the design and implementation of health protection and health promotion activities intended to reduce the risk of injury, disability, and death, particularly when due to poor building code adoption/adaptation, application, and enforcement.

  2. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-01-01

    © 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC

  3. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames, then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is thus coded using the matching pursuit algorithm, which decomposes the signal over a specially designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
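The matching pursuit decomposition at the core of the scheme can be sketched as a greedy projection loop. The 1-D toy dictionary below stands in for the paper's bidimensional anisotropic one, and the function is a generic textbook sketch rather than the authors' implementation.

```python
import math

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: repeatedly pick the atom best correlated with the residual,
    record (atom index, coefficient), and subtract its projection."""
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        scored = []
        for j, atom in enumerate(dictionary):
            norm = math.sqrt(sum(a * a for a in atom))
            # inner product with the unit-normalised atom
            ip = sum(r * a for r, a in zip(residual, atom)) / norm
            scored.append((abs(ip), ip, j, norm))
        _, ip, j, norm = max(scored, key=lambda t: t[0])
        picks.append((j, ip))
        residual = [r - ip * a / norm
                    for r, a in zip(residual, dictionary[j])]
    return picks, residual

# toy signal decomposed exactly by two dictionary atoms
picks, residual = matching_pursuit(
    [3, 0, 2, 2],
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]],
    n_atoms=2)
```

In the codec the selected atom indices and quantized coefficients are what gets entropy-coded in place of DCT coefficients.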

  4. MORSE - E. A new version of the MORSE code

    International Nuclear Information System (INIS)

    Ponti, C.; Heusden, R. van.

    1974-12-01

    This report describes a version of the MORSE code which has been written to facilitate the practical use of this programme. MORSE-E is a ready-to-use version that does not require particular programming efforts to adapt the code to the problem to be solved. It treats source volumes of different geometrical shapes. MORSE-E calculates the flux of particles as the sum of the paths travelled within a given volume; the corresponding relative errors are also provided
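The path-length (track-length) flux estimate that MORSE-E computes can be illustrated with a 1-D toy: the scalar flux in a region is the total particle track length accumulated inside it, divided by the region volume, per source particle. The function, the unit cross-section slab and all parameter values are invented for illustration.

```python
import random

def track_length_flux(n_particles, region_min, region_max, volume, seed=2):
    """Monte Carlo track-length flux estimator, 1-D toy problem.

    Particles start at x = 0 and travel in +x with exponentially
    distributed path lengths (total cross section sigma_t = 1 /cm).
    The tally sums each particle's track length inside
    [region_min, region_max] and normalises by volume and history count.
    """
    rng = random.Random(seed)
    total_track = 0.0
    for _ in range(n_particles):
        path = rng.expovariate(1.0)  # distance travelled before absorption
        # overlap of the flight segment [0, path] with the tally region
        total_track += max(0.0, min(path, region_max) - region_min)
    return total_track / (volume * n_particles)

# analytic answer for the region [0, 1] with sigma_t = 1: 1 - exp(-1) ~ 0.632
flux = track_length_flux(100000, 0.0, 1.0, 1.0)
```

The relative-error bookkeeping MORSE-E provides corresponds to the sample variance of the per-history track lengths accumulated here.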

  5. Edge-preserving Intra Depth Coding based on Context-coding and H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Salmistraro, Matteo; Forchhammer, Søren

    2013-01-01

    Depth map coding plays a crucial role in 3D video communication systems based on the “Multi-view Video plus Depth” representation, as view synthesis performance is strongly affected by the accuracy of depth information, especially at edges in the depth map image. In this paper an efficient algorithm for edge-preserving intra depth compression based on H.264/AVC is presented. The proposed method introduces a new Intra mode specifically targeted to depth macroblocks with arbitrarily shaped edges, which are typically not efficiently represented by the DCT. Edge macroblocks are partitioned into two regions, each approximated by a flat surface. Edge information is encoded by means of context-coding with an adaptive template. As a novel element, the proposed method allows exploiting the edge structure of previously encoded edge macroblocks during the context-coding step to further increase compression...

  6. The new Italian code of medical ethics.

    Science.gov (United States)

    Fineschi, V; Turillazzi, E; Cateni, C

    1997-01-01

    In June 1995, the Italian code of medical ethics was revised in order that its principles should reflect the ever-changing relationship between the medical profession and society and between physicians and patients. The updated code is also a response to new ethical problems created by scientific progress; the discussion of such problems often shows up a need for better understanding on the part of the medical profession itself. Medical deontology is defined as the discipline for the study of norms of conduct for the health care professions, including moral and legal norms as well as those pertaining more strictly to professional performance. The aim of deontology is, therefore, the in-depth investigation and revision of the code of medical ethics. It is in the light of this conceptual definition that one should interpret a review of the different codes which have attempted, throughout the various periods of Italy's recent history, to adapt ethical norms to particular social and health care climates. PMID:9279746

  7. Models and applications of the UEDGE code

    International Nuclear Information System (INIS)

    Rensink, M.E.; Knoll, D.A.; Porter, G.D.; Rognlien, T.D.; Smith, G.R.; Wising, F.

    1996-09-01

    The transport of particles and energy from the core of a tokamak to nearby material surfaces is an important problem for understanding present experiments and for designing reactor-grade devices. A number of fluid transport codes have been developed to model the plasma in the edge and scrape-off layer (SOL) regions. This report will focus on recent model improvements and illustrative results from the UEDGE code. Some geometric and mesh considerations are introduced, followed by a general description of the plasma and neutral fluid models. A few comments on computational issues are given and then two important applications are illustrated concerning benchmarking and the ITER radiative divertor. Finally, we report on some recent work to improve the models in UEDGE by coupling to a Monte Carlo neutrals code and by utilizing an adaptive grid

  8. Sustainable integration of EU research in severe accident phenomenology and management (SARNET2 project)

    International Nuclear Information System (INIS)

    Van Dorsselaere, Jean-Pierre; Albiol, Thierry; Chaumont, Bernard; Haste, Tim; Journeau, Christophe; Meyer, Leonhard; Sehgal, Bal Raj; Schwinges, Bernd; Beraha, David; Annunziato, Alessandro; Zeyen, Roland

    2010-01-01

    In order to optimise the use of the available means and to constitute sustainable research groups in the European Union, the Severe Accident Research NETwork of Excellence (SARNET) has gathered 51 organisations representing most of the actors involved in Severe Accident (SA) research in Europe plus Canada. This project was co-funded by the European Commission (EC) under the 6th Euratom Framework Programme. Its objective was to resolve the most important pending issues for enhancing, with regard to SAs, the safety of existing and future Nuclear Power Plants (NPPs). SARNET tackled the fragmentation that existed between the national R and D programmes by defining common research programmes and developing common computer codes for safety assessment. The Joint Programme of Activities consisted of: (i) Implementing an advanced communication tool for accessing all project information, fostering exchange of information, and managing documents; (ii) Harmonizing and re-orienting the research programmes, and defining new ones; (iii) Analyzing the experimental results provided by research programmes in order to elaborate a common understanding of relevant phenomena; (iv) Developing the ASTEC code (an integral computer code used to predict NPP behaviour during a postulated SA) by integrating the knowledge produced within SARNET; (v) Developing Scientific Databases, in which the results of experimental research programmes are stored in a common format; (vi) Developing a common methodology for Probabilistic Safety Assessment of NPPs; (vii) Developing short courses and writing a textbook on Severe Accidents for students and researchers; (viii) Promoting personnel mobility amongst the various European organizations.
This paper presents the major achievements after four and a half years of operation of the network, in terms of knowledge gained, of improvements of the ASTEC reference code, of dissemination of results and of integration of the research programmes conducted by the various

  9. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    International Nuclear Information System (INIS)

    Pin, Francois G.

    2002-01-01

    Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically ''batches of one''. Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a ''generic code'' to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, kinematics configuration (e.g., new tools, added modules). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, and can adapt to real time changes in number and
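
    The general solution of an under-specified system (more joints than controlled Task-Space variables) is commonly obtained with the Moore-Penrose pseudo-inverse, which picks the minimum-norm joint motion among the infinitely many that satisfy the task. A minimal sketch of that idea; the Jacobian below is an arbitrary illustration, not the ORNL generic code:

```python
import numpy as np

def least_norm_solution(J, x_dot):
    """Minimum-norm q_dot solving the under-specified system J @ q_dot = x_dot."""
    return np.linalg.pinv(J) @ x_dot

# Illustrative case: 3 joints (Joint Space) controlling 2 task variables (Task Space).
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])
x_dot = np.array([0.1, -0.2])          # desired Task-Space velocity
q_dot = least_norm_solution(J, x_dot)  # Joint-Space command
```

In practice, secondary objectives and constraints (joint limits, obstacles) are folded in by projecting them onto the null space of J, which is what lets behavior change at loop rate without re-programming.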

  10. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; only this index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. 
In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
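
    The encode/decode cycle described in the abstract (nearest-codeword search, index transmission, table lookup) can be sketched as follows; the tiny codebook is hand-picked for illustration rather than trained with K-means or the generalized Lloyd algorithm:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Return the index of the nearest codeword (squared-error match) for each vector."""
    # distances has shape (num_vectors, num_codewords)
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Table lookup: each index is simply an address into the codebook."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
blocks = np.array([[0.2, -0.1], [3.8, 4.1], [0.9, 1.2]])
idx = vq_encode(blocks, codebook)    # only these indices are sent to the channel
recon = vq_decode(idx, codebook)     # lossy reconstruction at the receiver
```

Compression comes from sending log2(number of codewords) bits per block instead of the block itself.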

  11. Real-time range acquisition by adaptive structured light.

    Science.gov (United States)

    Koninckx, Thomas P; Van Gool, Luc

    2006-03-01

    The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on-the-fly and renders the system more robust against instant scene variability and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues--based upon pattern color, geometry, and tracking--yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time-window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, dependent on scene complexity.

  12. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    Science.gov (United States)

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides carry genetic code context and exhibit fuzzy behaviors due to their special structure and order in the DNA strand. Employing conventional fixed length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological context aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution, producing a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress the signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions contrary to fixed length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e. 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study proves that conventional fixed length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents. 
The algorithm applied to a variety of DNA datasets produced noteworthy discrimination between coding and non-coding regions contrary
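
    The 3-base periodicity that coding regions exhibit is conventionally measured by mapping the sequence to four binary indicator sequences and reading the DFT power at the N/3 frequency bin; a minimal illustration of this underlying signal property (not the FAWMF algorithm itself):

```python
import numpy as np

def periodicity_3(seq):
    """Combined indicator-sequence spectral power at the 3-base period."""
    N = len(seq)
    k = N // 3                    # DFT bin corresponding to period 3 (N divisible by 3)
    power = 0.0
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq])
        power += abs(np.fft.fft(u)[k]) ** 2
    return power

coding_like = "ATGATGATGATGATGATGATG"   # strong period-3 pattern (codon-like repeat)
random_like = "ATCGGATCCGTAAGCTTGCAC"   # same length, no obvious periodicity
```

For the repeating codon the peak dominates, while the aperiodic sequence gives a much smaller value; adaptive filters such as FAWMF aim to sharpen exactly this contrast.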

  13. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    J. Soraghan

    2007-01-01

    Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by “blowing out” the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and the recent VQ techniques for various test images.
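
    Golomb coding assigns shorter words to smaller integers, which is why it suits the peaked distribution of codebook indices mentioned above. A sketch of the Rice special case (Golomb parameter m = 2^k, here with k ≥ 1); this is a generic illustration, not necessarily the exact variable-length scheme of the paper:

```python
def rice_encode(n, k):
    """Golomb-Rice code, m = 2**k: unary-coded quotient, then k-bit binary remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def rice_decode(bits, k):
    q = bits.index("0")                  # leading 1s give the quotient
    r = int(bits[q + 1:q + 1 + k], 2)    # next k bits give the remainder
    return (q << k) | r

# Smaller indices get shorter codewords.
codes = [rice_encode(n, 2) for n in range(6)]   # ['000', '001', '010', '011', '1000', '1001']
```

The code is prefix-free, so a decoder can consume a concatenated bitstream without separators.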

  14. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    Salleh MFM

    2007-01-01

    Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and the recent VQ techniques for various test images.

  15. The first accident simulation of Angra-1 power plant using the ALMOD computer code

    International Nuclear Information System (INIS)

    Camargo, C.T.M.

    1981-01-01

    The implementation of the German computer code ALMOD and its application to the calculation of Angra-1, a nuclear power plant different from the KWU power plants, demanded study and model adaptation; for economic reasons, simplifications and optimizations were also necessary. The first results define the analytical potential of the computer code, confirm the adequacy of the adaptations made, and provide relevant conclusions about the Angra-1 safety analysis, showing at the same time areas in which the model can be applied or simply improved. (E.G.) [pt

  16. Bilayer Protograph Codes for Half-Duplex Relay Channels

    Science.gov (United States)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel simultaneously addresses two important issues: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted to various channel conditions without extensive re-optimization. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. 
These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive
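
    A protograph code is obtained by "lifting" a small base matrix: each nonzero entry is replaced by a Z×Z circulant permutation, copying the protograph Z times while preserving its degree structure. A toy sketch with an arbitrary base matrix and shift values (not the JPL bilayer design):

```python
import numpy as np

def lift_protograph(base, Z, shifts):
    """Lift a base matrix: 1-entries become ZxZ circulant permutations, 0-entries zero blocks."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
    return H

base = np.array([[1, 1, 1, 0],      # 2x4 protograph (illustrative only)
                 [0, 1, 1, 1]])
shifts = [[0, 1, 2, 0],
          [0, 3, 0, 1]]
H = lift_protograph(base, Z=4, shifts=shifts)   # 8x16 parity-check matrix
```

Because each block is a permutation, the lifted matrix inherits the protograph's row and column weights, which is what makes threshold analysis on the small graph carry over to the full code.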

  17. Importance biasing scheme implemented in the PRIZMA code

    International Nuclear Information System (INIS)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-01-01

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources and material composition, and to obtain parameters specified by the user. There is a capability to calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles) taking into account possible transmutations. An importance biasing scheme was implemented to solve problems which require calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem
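
    The principle behind importance biasing, sampling from a distribution that favors the rare region and reweighting each history by the likelihood ratio so the estimator stays unbiased, can be shown on a scalar rare-event problem (a generic illustration, not PRIZMA's actual scheme):

```python
import math
import random

def importance_estimate(t, rate, n, seed=0):
    """Estimate P(X > t) for X ~ Exp(1) by sampling Y ~ Exp(rate) and reweighting."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.expovariate(rate)            # biased source pushes samples into the tail
        if y > t:
            # likelihood ratio f(y)/g(y) keeps the estimator unbiased
            total += math.exp(-y) / (rate * math.exp(-rate * y))
    return total / n

est = importance_estimate(t=10.0, rate=0.1, n=200_000)
true = math.exp(-10)   # analytic answer, about 4.5e-5
# direct (analogue) sampling would score only ~9 tail hits in 200,000 histories
```

The biased density puts most samples beyond the threshold, so the variance of the estimate drops by orders of magnitude for the same number of histories.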

  18. Computerized coding system for life narratives to assess students' personality adaptation

    NARCIS (Netherlands)

    He, Q.; Veldkamp, B.P.; Westerhof, G.J.; Pechenizkiy, Mykola; Calders, Toon; Conati, Cristina; Ventura, Sebastian; Romero, Cristobal; Stamper, John

    2011-01-01

    The present study is a trial in developing an automatic computerized coding framework with text mining techniques to identify the characteristics of redemption and contamination in life narratives written by undergraduate students. In the initial stage of text classification, the keyword-based

  19. Hybrid Strategies for Link Adaptation Exploiting Several Degrees of Freedom in OFDM Based Broadband Wireless Systems

    DEFF Research Database (Denmark)

    Das, Suvra S.; Rahman, Muhammad Imadur; Wang, Yuanye

    2007-01-01

    In orthogonal frequency division multiplexing (OFDM) systems, there are several degrees of freedom in time and frequency domain, such as, sub-band size, forward error control coding (FEC) rate, modulation order, power level, modulation adaptation interval, coding rate adaptation interval and powe...... of the link parameters based on the channel conditions would lead to highly complex systems with high overhead. Hybrid strategies to vary the adaptation rates to tradeoff achievable efficiency and complexity are presented in this work....

  20. Improvement of FLOWER code and its application in Daya Bay

    International Nuclear Information System (INIS)

    Zhang Shaodong; Zhang Yongxing

    1995-01-01

    FLOWER, a computer code recommended by the USNRC for assessing environmental impact in tidal regions, was adapted and improved to handle the influence of the longshore drift current on the dilution of contaminants and heat at the bay mouth. The code outputs were also extended with more intermediate results, such as average concentration and temperature values for all tides considered. Finally, the modified code was applied to the dispersion calculation of heat and liquid effluents from the Daya Bay Nuclear Power Plant, and the impacts of routine operation of the plant on Daya Bay sea waters were given

  1. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  2. Code of conduct for scientists (abstract)

    International Nuclear Information System (INIS)

    Khurshid, S.J.

    2011-01-01

    The emergence of advanced technologies in the last three decades and extraordinary progress in our knowledge of the basic physical, chemical and biological properties of living matter have offered tremendous benefits to human beings, but have simultaneously highlighted the need for greater awareness and responsibility by the scientists of the 21st century. A scientist is not born with ethics, nor is science ethically neutral, but there are ethical dimensions to scientific work. There is a need to evolve an appropriate Code of Conduct for scientists working in every field of science. However, while considering the contents, promulgation and adaptation of Codes of Conduct for Scientists, a balance needs to be maintained between the freedom of scientists and, at the same time, some binding obligations on them in the form of Codes of Conduct. The use of good and safe laboratory procedures, whether codified by law or by common practice, must also be considered part of the moral duties of scientists. It is internationally agreed that a general Code of Conduct cannot be formulated for all scientists universally, but there should be a set of 'building blocks' aimed at establishing the Code of Conduct for Scientists either as individual researchers or as those responsible for the direction, evaluation and monitoring of scientific activities at the institutional or organizational level. (author)

  3. Application of Best Estimate Approach for Modelling of QUENCH-03 and QUENCH-06 Experiments

    Directory of Open Access Journals (Sweden)

    Tadas Kaliatka

    2016-04-01

    In this article, the QUENCH-03 and QUENCH-06 experiments are modelled using the ASTEC and RELAP/SCDAPSIM codes. For the uncertainty and sensitivity analysis, the SUSA3.5 and SUNSET tools were used. The article demonstrates that, by applying the best estimate approach, it is possible to develop a basic QUENCH input deck and two sets of input parameters covering the maximal and minimal ranges of uncertainties. These allow simulating different (but of the same nature) tests and obtaining calculation results with an evaluated range of uncertainties.

  4. Source term evaluation for accident transients in the experimental fusion facility ITER

    Energy Technology Data Exchange (ETDEWEB)

    Virot, F.; Barrachin, M.; Cousin, F. [IRSN, BP3-13115, Saint Paul lez Durance (France)

    2015-03-15

    We have studied the transport and chemical speciation of radio-toxic and toxic species for a water-ingress event in the vacuum vessel of the experimental fusion facility ITER with the ASTEC code. In particular, our evaluation takes into account assessed thermodynamic data for the beryllium gaseous species. This study shows that deposited beryllium dusts of atomic Be and Be(OH){sub 2} are formed. It also shows that Be(OT){sub 2} could exist under some conditions in the drain tank. (authors)

  5. Development of sub-channel/system coupled code and its application to a supercritical water-cooled test loop

    International Nuclear Information System (INIS)

    Liu, X.J.; Yang, T.; Cheng, X.

    2014-01-01

    To analyze the local thermal-hydraulic parameters in the supercritical water reactor-fuel qualification test (SCWR-FQT) fuel bundle with a flow blockage, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code and the system code are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal-hydraulic parameters are predicted by the sub-channel code COBRA-SC. Sensitivity analyses are carried out in the ATHLET-SC and COBRA-SC codes, respectively, to identify the appropriate models for description of the flow blockage phenomenon in the test loop. Some measures to mitigate the accident consequences are also trialed to demonstrate their effectiveness. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and that the peak cladding temperature caused by blockage in the fuel assembly can be reduced effectively by the safety measures of the SCWR-FQT. (author)
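
    The explicit coupling pattern described, in which the system code supplies boundary conditions and the sub-channel code returns local feedback through data transfer at the interface each step, can be sketched with placeholder physics; the two "solvers" below are hypothetical stand-ins, not ATHLET-SC or COBRA-SC internals:

```python
def system_step(outlet_T):
    """Stand-in 'system code': loop inlet temperature responding to core outlet."""
    return 280.0 + 0.1 * (outlet_T - 300.0)

def subchannel_step(inlet_T):
    """Stand-in 'sub-channel code': local outlet temperature from a crude energy balance."""
    return inlet_T + 10.0

inlet, outlet = 280.0, 300.0
history = []
for step in range(50):                 # explicit coupling: data exchanged every step
    inlet = system_step(outlet)        # system -> sub-channel boundary data
    outlet = subchannel_step(inlet)    # sub-channel -> system feedback
    history.append(outlet)
```

With this toy feedback the exchange contracts to a fixed point; real couplings must additionally manage time-step synchronization and mesh-to-mesh data adaptation at the interface.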

  6. Sub-channel/system coupled code development and its application to SCWR-FQT loop

    International Nuclear Information System (INIS)

    Liu, X.J.; Cheng, X.

    2015-01-01

    Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under some accident conditions. Several thermal–hydraulic codes (a system code and a sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are utilized to get the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under some accident cases (e.g. a flow blockage during LOCA). Some measures to mitigate the accident consequences are proposed from the sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and that the peak cladding temperature caused by blockage in the fuel bundle can be reduced effectively by the safety measures

  7. Sub-channel/system coupled code development and its application to SCWR-FQT loop

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X.J., E-mail: xiaojingliu@sjtu.edu.cn [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240 (China); Cheng, X. [Institute of Fusion and Reactor Technology, Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany)

    2015-04-15

    Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under some accident conditions. Several thermal–hydraulic codes (a system code and a sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are utilized to get the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under some accident cases (e.g. a flow blockage during LOCA). Some measures to mitigate the accident consequences are proposed from the sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and that the peak cladding temperature caused by blockage in the fuel bundle can be reduced effectively by the safety measures

  8. The use of best estimate codes to improve the simulation in real time

    International Nuclear Information System (INIS)

    Rivero, N.; Esteban, J. A.; Lenhardt, G.

    2007-01-01

    Best estimate codes are assumed to be the technology solution providing the most realistic and accurate response. Best estimate technology provides a complementary solution to the conservative simulation technology usually applied to determine plant safety margins and perform safety-related studies. In the early 1990s, within the MAS project, Tecnatom pioneered the initiative to implement best estimate codes in its training simulators. The result of this project was the implementation of the first six-equation thermal-hydraulic code worldwide (TRAC-RT) running in a training environment. To meet real-time and other specific training requirements, it was necessary to overcome important difficulties. Tecnatom has just adapted the Global Nuclear Fuel core design code PANAC11, and is about to complete the adaptation of the General Electric TRACG04 thermal-hydraulic code. This technology constitutes a unique solution for nuclear plants aiming at the highest fidelity in simulation, enabling the simulator to be regarded as a multipurpose (engineering and training) simulation platform. In addition, a visual environment designed to optimize the model life cycle, covering both pre- and post-processing activities, is in its late development phase. (Author)

  9. Coded Shack-Hartmann Wavefront Sensor

    KAUST Repository

    Wang, Congli

    2016-12-01

    Wavefront sensing is an old yet fundamental problem in adaptive optics. Traditional wavefront sensors are limited to time-consuming measurements, complicated and expensive setups, or low theoretically achievable resolution. In this thesis, we introduce an optically encoded and computationally decodable novel approach to the wavefront sensing problem: the Coded Shack-Hartmann. Our proposed Coded Shack-Hartmann wavefront sensor is inexpensive, easy to fabricate and calibrate, highly sensitive, accurate, and of high resolution. Most importantly, using simple optical flow tracking combined with a phase smoothness prior, with the help of modern optimization techniques, the computational part is split, efficient, and parallelized; hence real-time performance has been achieved on a Graphics Processing Unit (GPU), with high accuracy as well. This is validated by experimental results. We also show how the optical flow intensity consistency term can be derived using rigorous scalar diffraction theory with proper approximation. This is the true physical law behind our model. Based on this insight, the Coded Shack-Hartmann can be interpreted as an illumination post-modulated wavefront sensor. This offers a new theoretical approach for wavefront sensor design.

  10. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    Science.gov (United States)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collision detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. Results for the flow over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
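    Flow-feature-based AMR tagging of the kind mentioned above can be illustrated compactly. The criterion below (vorticity magnitude above a fraction of the field maximum, plus a one-cell buffer) is an assumed generic choice, not the paper's actual sensor:

```python
import numpy as np

def flag_cells(vorticity, rel_threshold=0.2, buffer=1):
    """Flag cells for refinement where |vorticity| exceeds a relative threshold."""
    flags = np.abs(vorticity) > rel_threshold * np.max(np.abs(vorticity))
    # Buffer: dilate the flagged region so refined patches contain the feature.
    out = flags.copy()
    for _ in range(buffer):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

# A single vortex-like feature on a coarse grid.
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
vort = np.exp(-((x - 0.3) ** 2 + y ** 2) / 0.02)
flags = flag_cells(vort)
```

In a real solver the flagged set would drive patch generation on the off-body mesh at the next AMR regrid step.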

  11. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper, four blind adaptive array algorithms are developed, and their performance under different test situations (e.g. an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations, and an optimum one ...
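    As a concrete example of a blind adaptive array algorithm of the kind such studies compare, the sketch below runs the classic constant modulus algorithm (CMA) on a single PSK source in sensor noise. The array geometry, step size, and starting weights are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem, n_samp, mu = 4, 2000, 0.01
theta = np.deg2rad(20.0)                     # assumed direction of arrival
steer = np.exp(-1j * np.pi * np.arange(n_elem) * np.sin(theta))

# Unit-modulus QPSK source plus small additive white Gaussian sensor noise.
s = np.exp(1j * ((np.pi / 2) * rng.integers(0, 4, n_samp) + np.pi / 4))
noise = 0.01 * (rng.standard_normal((n_elem, n_samp))
                + 1j * rng.standard_normal((n_elem, n_samp)))
X = np.outer(steer, s) + noise

w = np.zeros(n_elem, dtype=complex)
w[0] = 0.3                                   # deliberately mis-scaled start
outputs = []
for k in range(n_samp):
    x = X[:, k]
    y = np.vdot(w, x)                        # array output y = w^H x
    e = abs(y) ** 2 - 1.0                    # constant-modulus error
    w = w - mu * e * x * np.conj(y)          # blind stochastic-gradient update
    outputs.append(y)
# After adaptation the output envelope returns to ~1 without any training data.
```

CMA needs no reference signal; it exploits only the constant envelope of the PSK waveform, which is what makes it "blind".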

  12. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  13. Time course of dynamic range adaptation in the auditory nerve

    Science.gov (United States)

    Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2012-01-01

    Auditory adaptation to sound-level statistics occurs as early as in the auditory nerve (AN), the first stage of neural auditory processing. In addition to firing rate adaptation characterized by a rate decrement dependent on previous spike activity, AN fibers show dynamic range adaptation, which is characterized by a shift of the rate-level function or dynamic range toward the most frequently occurring levels in a dynamic stimulus, thereby improving the precision of coding of the most common sound levels (Wen B, Wang GI, Dean I, Delgutte B. J Neurosci 29: 13797–13808, 2009). We investigated the time course of dynamic range adaptation by recording from AN fibers with a stimulus in which the sound levels periodically switch from one nonuniform level distribution to another (Dean I, Robinson BL, Harper NS, McAlpine D. J Neurosci 28: 6430–6438, 2008). Dynamic range adaptation occurred rapidly, but its exact time course was difficult to determine directly from the data because of the concomitant firing rate adaptation. To characterize the time course of dynamic range adaptation without the confound of firing rate adaptation, we developed a phenomenological “dual adaptation” model that accounts for both forms of AN adaptation. When fitted to the data, the model predicts that dynamic range adaptation occurs as rapidly as firing rate adaptation, over 100–400 ms, and the time constants of the two forms of adaptation are correlated. These findings suggest that adaptive processing in the auditory periphery in response to changes in mean sound level occurs rapidly enough to have significant impact on the coding of natural sounds. PMID:22457465
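    The exponential time course quoted above (time constants of roughly 100-400 ms) is typically characterized by fitting r(t) = r_inf + (r0 - r_inf)·exp(-t/tau) to rate data. The sketch below fits such a curve by grid search on synthetic data (an assumed tau of 250 ms); it is a generic fitting illustration, not the authors' dual-adaptation model:

```python
import numpy as np

def fit_tau(t, r, taus):
    """Grid-search tau; amplitude and offset are solved in closed form per tau."""
    best_tau, best_err = None, np.inf
    for tau in taus:
        e = np.exp(-t / tau)
        A = np.column_stack([e, np.ones_like(e)])   # r = a*exp(-t/tau) + b
        coef, *_ = np.linalg.lstsq(A, r, rcond=None)
        err = np.sum((A @ coef - r) ** 2)
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau

t = np.linspace(0.0, 1.0, 200)                    # seconds
r_true = 60 + (140 - 60) * np.exp(-t / 0.25)      # assumed tau = 250 ms
rng = np.random.default_rng(0)
r_obs = r_true + rng.normal(0.0, 1.0, t.size)     # noisy rate estimates
tau_hat = fit_tau(t, r_obs, np.linspace(0.05, 0.60, 111))
```

Fixing tau makes the remaining parameters linear, so each grid point costs only one small least-squares solve.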

  14. 3D equilibrium codes for mirror machines

    International Nuclear Information System (INIS)

    Kaiser, T.B.

    1983-01-01

    The codes developed for computing three-dimensional guiding center equilibria for quadrupole tandem mirrors are discussed. TEBASCO (Tandem equilibrium and ballooning stability code) is a code developed at LLNL that uses a further expansion of the paraxial equilibrium equation in powers of β (plasma pressure/magnetic pressure). It has been used to guide the design of the TMX-U and MFTF-B experiments at Livermore. Its principal weakness is its perturbative nature, which renders its validity for high-β calculations open to question. In order to compute high-β equilibria, the reduced MHD technique that has proven useful for determining toroidal equilibria was adapted to the tandem mirror geometry. In this approach, the paraxial expansion of the MHD equations yields a set of coupled nonlinear equations of motion, valid for arbitrary β, that are solved as an initial-value problem. Two particular formulations have been implemented in computer codes developed at NYU/Kyoto U and LLNL. They differ primarily in the type of grid, the location of the lateral boundary, the damping techniques employed, and the method of calculating pressure-balance equilibrium. Discussions of these codes are presented in this paper. (Kato, T.)

  15. Implementation of computer codes for performance assessment of the Republic repository of LLW/ILW Mochovce

    International Nuclear Information System (INIS)

    Hanusik, V.; Kopcani, I.; Gedeon, M.

    2000-01-01

    This paper describes the selection and adaptation of the computer codes required to assess the effects of radionuclide release from the Mochovce Radioactive Waste Disposal Facility. The paper also demonstrates how these codes can be integrated into the performance assessment methodology. The considered codes include DUST-MS for source term release, MODFLOW for ground-water flow, and BS for transport through the biosphere and dose assessment. (author)

  16. Regional Atmospheric Transport Code for Hanford Emission Tracking, Version 2 (RATCHET2)

    International Nuclear Information System (INIS)

    Ramsdell, James V.; Rishel, Jeremy P.

    2006-01-01

    This manual describes the atmospheric model and computer code for the Atmospheric Transport Module within SAC. The Atmospheric Transport Module, called RATCHET2, calculates the time-integrated air concentration and surface deposition of airborne contaminants to the soil. The RATCHET2 code is an adaptation of the Regional Atmospheric Transport Code for Hanford Emissions Tracking (RATCHET). The original RATCHET code was developed to perform the atmospheric transport for the Hanford Environmental Dose Reconstruction Project. Fundamentally, the two sets of codes are identical; no capabilities have been deleted from the original version of RATCHET. Most modifications are generally limited to revision of the run-specification file to streamline the simulation process for SAC.

  17. Regional Atmospheric Transport Code for Hanford Emission Tracking, Version 2 (RATCHET2)

    Energy Technology Data Exchange (ETDEWEB)

    Ramsdell, James V.; Rishel, Jeremy P.

    2006-07-01

    This manual describes the atmospheric model and computer code for the Atmospheric Transport Module within SAC. The Atmospheric Transport Module, called RATCHET2, calculates the time-integrated air concentration and surface deposition of airborne contaminants to the soil. The RATCHET2 code is an adaptation of the Regional Atmospheric Transport Code for Hanford Emissions Tracking (RATCHET). The original RATCHET code was developed to perform the atmospheric transport for the Hanford Environmental Dose Reconstruction Project. Fundamentally, the two sets of codes are identical; no capabilities have been deleted from the original version of RATCHET. Most modifications are generally limited to revision of the run-specification file to streamline the simulation process for SAC.

  18. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    Science.gov (United States)

    Trantham, Matthew; Kuranz, Carolyn; Fein, Jeff; Wan, Willow; Young, Rachel; Keiter, Paul; Drake, R. Paul

    2015-11-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction, and laser ray tracing. This poster will demonstrate some of the experiments the CRASH code has helped design or analyze, including Kelvin-Helmholtz, Rayleigh-Taylor, magnetized flows, jets, and laser-produced plasmas. This work is funded by the following grants: DEFC52-08NA28616, DE-NA0001840, and DE-NA0002032.

  19. Body mass index does not influence post-treatment survival in early stage endometrial cancer: results from the MRC ASTEC trial.

    Science.gov (United States)

    Crosbie, Emma J; Roberts, Chris; Qian, Wendi; Swart, Ann Marie; Kitchener, Henry C; Renehan, Andrew G

    2012-04-01

    Body mass index (BMI) is a major risk factor for endometrial cancer incidence, but its impact on post-treatment survival is unclear. We investigated the relationships of BMI (categorised using the WHO definitions) with clinico-pathological characteristics and outcome in women treated within the MRC ASTEC randomised trial, which provides data from patients who received standardised allocated treatments and therefore reduces biases. The impact of BMI on both recurrence-free survival (RFS) and overall survival (OS) was analysed using Cox regression models. An a priori framework for evaluating potential biases was explored. From 1408 participants, there were 1070 women with determinable BMI (median = 29.1 kg/m²). Histological types were endometrioid (type 1) in 893 and non-endometrioid (type 2) in 146 women; the proportion of the latter decreased with increasing BMI (8% versus 19% for the WHO obese III category versus normal weight, p-trend = 0.003). For type 1 carcinomas, increasing BMI was associated with less aggressive histopathological features (depth of invasion, p = 0.006; tumour grade, p = 0.015). With a median follow-up of 34.3 months, there was no influence of BMI on RFS - adjusted HRs per 5 kg/m² were 0.98 (95% CI 0.86, 1.13) and 0.95 (0.74, 1.24) for type 1 and 2 carcinomas, respectively - and no influence on OS - adjusted HRs per 5 kg/m² were 0.96 (0.81, 1.14) and 0.92 (0.70, 1.23). These findings demonstrate an important principle: an established link between an exposure (here, obesity) and increased incident cancer risk does not necessarily translate into an inferior outcome following treatment for that cancer. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Determination of the hydrogen source term during the reflooding of an overheated core: Calculation results of the integral reflood test QUENCH-03 with PWR-type bundle

    International Nuclear Information System (INIS)

    Chikhi, Nourdine; Nguyen, Nam Giang; Fleurot, Joelle

    2012-01-01

    Highlights: ► Calculation of the QUENCH-03 experiment with ASTEC/CATHARE. ► Validation of the reflooding model in severe accident conditions. ► Demonstration of a minimum flow rate for a successful reflood by using a system code. ► Effect of injection flow rate on hydrogen production. ► Effect of initial core temperature on hydrogen production. - Abstract: During a severe accident, one of the main accident management procedures consists of injecting water into the reactor core by means of various safety injection devices. Nevertheless, the success of a core reflood is not guaranteed because of possible negative effects: temperature escalation, enhanced hydrogen production, enhanced release of fission products, and core degradation due to thermal shock, shattering, debris and melt formation. The QUENCH-03 experiment was carried out to investigate the behavior of LWR fuel rods with little oxidation on reflooding at high temperature. Post-test calculations with the ASTEC-CATHARE V2 code were made for code assessment and validation of the new reflooding model. This thermal-hydraulic model is used to detect the quench front position and to calculate the heat transfer between fuel and fluid in the transition boiling region. Comparisons between the calculated and experimental results are presented. Emphasis has been placed on clad temperature, hydrogen production and melt relocation. The effects of the core damage state (initial temperature at reflooding onset) and the reflood mass flow rate on the hydrogen source term were investigated using the QUENCH-03 test as a base case. Calculations were made by varying both parameters in the input data deck. The results demonstrate (and confirm) the existence of a minimum flow rate for a successful reflood.

  1. Benefit of adaptive FEC in shared backup path protected elastic optical network.

    Science.gov (United States)

    Guo, Hong; Dai, Hua; Wang, Chao; Li, Yongcheng; Bose, Sanjay K; Shen, Gangxiang

    2015-07-27

    We apply an adaptive forward error correction (FEC) allocation strategy to an Elastic Optical Network (EON) operated with shared backup path protection (SBPP). To maximize the protected network capacity that can be carried, an Integer Linear Programming (ILP) model and a spectrum window plane (SWP)-based heuristic algorithm are developed. Simulation results show that the FEC coding overhead required by the adaptive FEC scheme is significantly lower than that needed by a fixed FEC allocation strategy, resulting in higher network capacity for the adaptive strategy. The adaptive FEC allocation strategy also significantly outperforms the fixed FEC allocation strategy in terms of both the spare capacity redundancy and the average FEC coding overhead needed per optical channel. The proposed heuristic algorithm is efficient: it not only performs close to the ILP model but also does much better than the shortest-path algorithm.
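    The advantage of adaptive over fixed FEC allocation can be seen in a toy model: each path gets the smallest overhead whose coding gain covers that path's requirement, while a fixed strategy must provision the worst case everywhere. The overheads, gains, and impairment model below are illustrative assumptions, not values from the paper:

```python
# (overhead fraction, net coding gain in dB) -- illustrative options only.
FEC_OPTIONS = [(0.07, 6.0), (0.15, 8.0), (0.25, 10.0), (0.50, 11.5)]

def required_gain_db(length_km):
    """Assumed toy impairment model: longer paths need more coding gain."""
    return 4.0 + 0.004 * length_km

def adaptive_overhead(length_km):
    """Pick the smallest overhead whose gain covers the path requirement."""
    for overhead, gain in FEC_OPTIONS:
        if gain >= required_gain_db(length_km):
            return overhead
    raise ValueError("path too long for available FEC options")

paths = [200, 500, 800, 1200, 1600]                    # km, illustrative
adaptive = [adaptive_overhead(d) for d in paths]
fixed = [adaptive_overhead(max(paths))] * len(paths)   # worst case everywhere
```

The spectrum freed by the lower average overhead is what translates into the higher protected network capacity reported for the adaptive strategy.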

  2. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

    The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm uses the advantages of both fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly (the value in each bin was generated randomly, and a normalization of each generated energy spectrum was then performed). The randomly generated neutron energy spectra were considered as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with the determinant of the matrix close to zero) should be solved. Solving the inverse problem with conventional methods unfolds the neutron energy spectrum with low accuracy. Applying iterative algorithms to such a problem, or using intelligent algorithms (which avoid solving the inverse problem altogether), is therefore usually preferred for unfolding the energy spectrum. This is the main reason for developing intelligent algorithms like ANFIS for the unfolding of neutron energy spectra. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were
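    The ill-conditioning that motivates such intelligent unfolding can be demonstrated directly: with a nearly singular response matrix, direct inversion amplifies even tiny measurement noise, while a simple multiplicative iterative update (a Richardson-Lucy-style stand-in, shown for illustration only and not the paper's ANFIS method) stays stable and nonnegative:

```python
import numpy as np

n = 20
E = np.arange(n, dtype=float)
# Broad, overlapping detector response rows make R nearly singular.
R = np.exp(-(E[:, None] - E[None, :]) ** 2 / (2 * 2.0 ** 2))
R /= R.sum(axis=1, keepdims=True)

s_true = np.exp(-(E - 5.0) ** 2 / 2) + 0.7 * np.exp(-(E - 14.0) ** 2 / 2)
rng = np.random.default_rng(0)
m = np.clip(R @ s_true + rng.normal(0.0, 1e-3, n), 1e-12, None)

naive = np.linalg.solve(R, m)          # direct inversion: noise is amplified

s = np.ones(n)                         # multiplicative iterative unfolding
for _ in range(500):
    s *= (R.T @ (m / (R @ s))) / (R.T @ np.ones(n))

err_naive = np.linalg.norm(naive - s_true)
err_iter = np.linalg.norm(s - s_true)
```

The iterative estimate stays close to the true two-peak spectrum, while the direct solution is dominated by amplified noise, which is exactly why iterative or learned unfolding is preferred.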

  3. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  4. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    KAUST Repository

    Alloum, Amira

    2016-07-28

    One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The usage of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.

  5. Efficient coding schemes with power allocation using space-time-frequency spreading

    Institute of Scientific and Technical Information of China (English)

    Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao

    2006-01-01

    An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high bit rate data transmission over frequency selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to provide higher quality of transmission in terms of the bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys high coding advantages and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.

  6. CONSUL code package application for LMFR core calculations

    Energy Technology Data Exchange (ETDEWEB)

    Chibinyaev, A.V.; Teplov, P.S.; Frolova, M.V. [RNC ' Kurchatovskiy institute' , Kurchatov sq.1, Moscow (Russian Federation)

    2008-07-01

    The CONSUL code package, designed for the calculation of reactor core characteristics, was developed in the early 1990s. The calculation of nuclear reactor core characteristics is carried out on the basis of correlated neutron, isotope and temperature distributions. The code package has generally been used for LWR core characteristics calculations. Recently, the CONSUL code package was adapted to calculate liquid metal fast reactors (LMFR). Comparisons with the IAEA computational benchmark 'Evaluation of benchmark calculations on a fast power reactor core with near zero sodium void effect' and with BN-1800 test calculations are presented in the paper. The IAEA benchmark core is based on the innovative BN-800 core concept with a sodium plenum above the core. The BN-1800 core is the next development step foreseen for the Russian fast reactor concept. The comparison of the operational parameters has shown good agreement and confirms the applicability of the CONSUL code package to LMFR core calculation. (authors)

  7. General features of the neutronics design code EQUICYCLE

    International Nuclear Information System (INIS)

    Jirlow, K.

    1978-10-01

    The neutronics code EQUICYCLE has been developed and improved over a long period of time. It is especially adapted to survey-type design calculations of large fast power reactors, with particular emphasis on the nuclear parameters of a realistic equilibrium fuel cycle. Thus the code is used to evaluate the breeding performance, the power distributions, and the uranium and plutonium mass balance for realistic refuelling schemes. In addition, reactivity coefficients can be calculated and the influence of burnup can be assessed. The code is two-dimensional and treats the reactor core in R-Z geometry. The basic ideas of the calculation scheme are the successive iterative improvement of cross-section sets and flux spectra, and the use of the mid-cycle flux for burning the fuel according to a specified refuelling scheme. Normally, given peak burnups and maximum power densities are used as boundary conditions. The code is capable of handling unconventional, so-called heterogeneous cores. (author)

  8. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify

  9. Adaptation of GRS calculation codes for Soviet reactors

    International Nuclear Information System (INIS)

    Langenbuch, S.; Petri, A.; Steinborn, J.; Stenbok, I.A.; Suslow, A.I.

    1994-01-01

    The use of ATHLET for incident calculations of WWER plants has been tested and verified in numerous calculations. Further adaptation may be needed for the WWER-1000 plants. Coupling ATHLET with the 3D nuclear model BIPR-8 for WWER cores clearly improves studies of the influence of neutron kinetics. In the case of RBMK reactors, ATHLET calculations show that typical incidents in the complex RBMK plants can be calculated, even though verification still has to be worked on. Results of the 3D core model QUABOX/CUBBOX-HYCA show good agreement between calculated and measured values in reactor plants. The calculations carried out to date were used to check the essential parameters influencing RBMK core behaviour, especially the dependence of the effective void reactivity on the number of control rods. (orig./HP) [de

  10. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring us to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...
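    For background, the conventional (non-sparse) LP analysis that sparse linear predictive coding departs from can be sketched with the standard Levinson-Durbin recursion, which solves the Toeplitz normal equations for the predictor coefficients; the AR(2) test signal below is an illustrative assumption, not data from the thesis:

```python
import numpy as np

def levinson_durbin(r, p):
    """LP coefficients a s.t. x[n] ~ sum_k a[k] * x[n-1-k], from autocorrelation r."""
    a = np.zeros(p + 1)
    e = r[0]                                            # prediction error energy
    for i in range(1, p + 1):
        k = (r[i] - np.dot(a[1:i], r[1:i][::-1])) / e   # reflection coefficient
        a_new = a.copy()
        a_new[i] = k
        for j in range(1, i):
            a_new[j] = a[j] - k * a[i - j]
        a, e = a_new, e * (1 - k * k)
    return a[1:], e

# Recover the coefficients of a synthetic AR(2) process.
rng = np.random.default_rng(0)
N = 20000
x = np.zeros(N)
w = rng.standard_normal(N)
for n in range(2, N):
    x[n] = 0.6 * x[n - 1] - 0.2 * x[n - 2] + w[n]
r = np.array([x[:N - k] @ x[k:] / N for k in range(3)])
a, e = levinson_durbin(r, 2)
```

Sparse LP methods replace the dense residual minimized here with a sparsity-promoting criterion, so the excitation itself becomes impulse-train-like.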

  11. The adaptive collision source method for discrete ordinates radiation transport

    International Nuclear Information System (INIS)

    Walters, William J.; Haghighat, Alireza

    2017-01-01

    Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into n'th-collided components. • The uncollided flux requires a high quadrature order; this is lowered with the number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5-4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high-order quadrature for the first iterations that need it, before shifting to lower-order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. The code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5-4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
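    The collision-source splitting behind the ACS method can be illustrated with a zero-dimensional toy: in an infinite homogeneous medium with scattering ratio c, the k-times-collided flux component is c^k times the uncollided one, so the Neumann series sums to phi_0/(1-c), and later, more isotropic components can be assigned a lower quadrature order. The numbers and the order-halving rule below are illustrative assumptions, not TITAN's actual logic:

```python
# Zero-dimensional sketch of collision-source (Neumann series) splitting.
c = 0.5                      # scattering ratio (scattering / total cross section)
phi_0 = 1.0                  # uncollided flux component from the fixed source
components = [phi_0]
quadrature_order = [16]      # high order for the angularly peaked uncollided flux
while components[-1] > 1e-10:
    components.append(c * components[-1])                       # k-th collided = c^k
    quadrature_order.append(max(4, quadrature_order[-1] // 2))  # relax per collision

total = sum(components)      # converges to the full scalar flux
exact = phi_0 / (1 - c)      # closed-form Neumann series limit
```

In the real 3-D method each component is an angular flux solved with its own quadrature set, which is where the 1.5-4x savings come from.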

  12. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

    VACOSS - Variable Coding Seal System - is intended to seal rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. The seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the sealing loop. The statistical treatment of coded data entered into the seal via the adapter box guarantees an extremely high degree of access reliability. The seal can store the data of two unauthorized seal openings, together with the time and duration of each opening. 2. The adapter box, which can be used for input, or input and output, of data indicating the seal integrity. 3. The simulation programme, which is located in the computing centre of the supervisory authority and permits the date and time of an opening to be determined by decoding the seal memory data. (orig./WB) [de

  13. Hamming Code Based Watermarking Scheme for 3D Model Verification

    Directory of Open Access Journals (Sweden)

    Jen-Tse Wang

    2014-01-01

    Full Text Available Due to the explosive growth of the Internet and the maturing of 3D hardware techniques, protecting 3D objects becomes a more and more important issue. In this paper, a public Hamming-code-based fragile watermarking technique is proposed for 3D object verification. An adaptive watermark is generated from each cover model by using the Hamming code technique. A simple least significant bit (LSB) substitution technique is employed for watermark embedding. In the extraction stage, the Hamming-code-based watermark can be verified by using Hamming code checking, without embedding any verification information. Experimental results show that 100% of the vertices of the cover model can be watermarked, extracted, and verified. The results also show that the proposed method can improve security and achieve low distortion of the stego object.
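    The Hamming-code verification idea can be sketched end to end for one (7,4) block: a codeword derived from the data is embedded in the LSBs of cover samples, and verification simply recomputes the syndrome; a single tampered bit yields a nonzero syndrome. This toy works on assumed scalar cover samples rather than the paper's 3D vertex coordinates:

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # systematic (7,4) generator
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])   # parity-check: H @ G.T = 0 (mod 2)

def encode(data4):
    return (data4 @ G) % 2

def syndrome(word7):
    return (H @ word7) % 2

cover = np.array([100, 57, 254, 3, 78, 129, 200])   # assumed cover samples
data = np.array([1, 0, 1, 1])
stego = (cover & ~1) | encode(data)     # LSB substitution of the codeword

extracted = stego & 1
ok = not syndrome(extracted).any()      # intact embedding: zero syndrome

tampered = extracted.copy()
tampered[2] ^= 1                        # flip one embedded bit
detected = bool(syndrome(tampered).any())
```

Because the check needs only the public H matrix and the stego data, no side-channel verification information has to be stored, matching the scheme's "public" property.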

  14. Analyzing and modeling the BIP Orgi aqueous formation tests with ASTEC-IODE code

    Energy Technology Data Exchange (ETDEWEB)

    Vela-Garcia, M.; Herranz, L. E.

    2011-07-01

    In the event of a severe accident, some fission products could be released from the fuel and become airborne in the reactor containment atmosphere. Because of its bio-sensitivity (thyroid uptake) and its volatility, iodine is one of the most important concerns in these postulated severe accident scenarios.

  15. The Department of Defense Critical Technologies Plan for the Committees on Armed Services United States Congress

    Science.gov (United States)

    1991-05-01

    Early focus is on investigating advanced simulation technology for engine controls (ASTEC) to exploit increases in computational power, and on... ASTEC (FY 1992). * ... electronics (FY 1992). * Fiber optics sensors/integration (FY 1994). * Lightweight nozzle actuator (FY 1995).

  16. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
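    The small-signal model described above (a difference-of-Gaussian bandpass filter plus an adaptive gain stage) can be sketched in 1D without the gain stage: the DoG response vanishes on the slowly varying irradiance (a linear shadow ramp) and concentrates at the reflectance edge. The sigmas and scene values are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def dog_filter(signal, s1=1.0, s2=8.0, radius=24):
    g1 = np.convolve(signal, gaussian_kernel(s1, radius), mode="same")
    g2 = np.convolve(signal, gaussian_kernel(s2, radius), mode="same")
    return g1 - g2                      # difference-of-Gaussian bandpass

n = 256
shadow = np.linspace(1.0, 0.3, n)                        # slow irradiance ramp
reflectance = np.where(np.arange(n) < n // 2, 0.4, 0.8)  # step edge at n//2
scene = shadow * reflectance
out = dog_filter(scene)
# In the interior the DoG of the linear ramp is ~0 (symmetric kernels pass
# linear functions unchanged), so the response concentrates at the edge.
```

This is the edge-and-contrast signal from which the Wiener restoration stage described above would rebuild the reflectance image.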

  17. Visual Communication with Retinex Coding

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  18. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant-envelope M-ary phase-shift keying (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
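The uncoded signaling stage maps k-bit groups onto a constant-envelope M-PSK constellation with M = 2^k. A small sketch of such a mapper (our own illustration using a Gray labelling, not the paper's implementation):

```python
import cmath
import math

def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def inverse_gray(g):
    """Inverse of the Gray code (prefix XOR)."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def mpsk_modulate(bits, k):
    """Map groups of k bits onto constant-envelope M-PSK symbols, M = 2**k.
    The label at each phase position is its Gray code, so neighbouring
    constellation points differ in a single bit."""
    m = 2 ** k
    assert len(bits) % k == 0, "bit stream must split into k-bit groups"
    symbols = []
    for i in range(0, len(bits), k):
        label = int("".join(str(b) for b in bits[i:i + k]), 2)
        phase = 2.0 * math.pi * inverse_gray(label) / m
        symbols.append(cmath.exp(1j * phase))
    return symbols
```

For QPSK (k = 2) this places the labels 00, 01, 11, 10 at phases 0, 90, 180 and 270 degrees, all on the unit circle (constant envelope).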

  19. Adaptive tree multigrids and simplified spherical harmonics approximation in deterministic neutral and charged particle transport

    International Nuclear Information System (INIS)

    Kotiluoto, P.

    2007-05-01

    A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In the novel approach, the adaptive tree multigrid technique is used in conjunction with the simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since the application of the MultiTrans code to BNCT dose planning problems, testing and development of the code have continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been demonstrated by verifying and validating the code performance for different types of neutral and charged particle transport problems, reported in separate publications. (orig.)

  20. Development of parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.

    1996-01-01

    We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially-dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using Message Passing Interface for a Massively Parallel CRI T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a Work/Data Sharing parallelization scheme and a workstation version

  1. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  2. Finding your way through EOL challenges in the ICU using Adaptive Leadership behaviours: A qualitative descriptive case study.

    Science.gov (United States)

    Adams, Judith A; Bailey, Donald E; Anderson, Ruth A; Thygeson, Marcus

    2013-12-01

    Using the Adaptive Leadership framework, we describe behaviours that providers used while interacting with family members facing the challenges of recognising that their loved one was dying in the ICU. In this prospective pilot case study, we selected one ICU patient with end-stage illness who lacked decision-making capacity. Participants included four family members, one nurse and two physicians. The principal investigator observed and recorded three family conferences and conducted one in-depth interview with the family. Three members of the research team independently coded the transcripts using a priori codes to describe the Adaptive Leadership behaviours that providers used to facilitate the family's adaptive work, met to compare and discuss the codes and resolved all discrepancies. We identified behaviours used by nurses and physicians that facilitated the family's ability to adapt to the impending death of a loved one. Examples of these behaviours include defining the adaptive challenges for families and foreshadowing a poor prognosis. Nurse and physician Adaptive Leadership behaviours can facilitate the transition from curative to palliative care by helping family members do the adaptive work of letting go. Further research is warranted to create knowledge for providers to help family members adapt. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Automatic coding and selection of causes of death: an adaptation of Iris software for using in Brazil.

    Science.gov (United States)

    Martins, Renata Cristófani; Buchalla, Cassia Maria

    2015-01-01

    To prepare a dictionary in Portuguese for use in Iris and to evaluate its completeness for coding causes of death. Initially, a dictionary with all illnesses and injuries was created based on the International Classification of Diseases - tenth revision (ICD-10) codes. This dictionary was based on two sources: the electronic file of ICD-10 volume 1 and the data from the Thesaurus of the International Classification of Primary Care (ICPC-2). Then, a death certificate sample from the Program of Improvement of Mortality Information in São Paulo (PRO-AIM) was coded manually and by Iris version V4.0.34, and the causes of death were compared. Whenever Iris was not able to code the causes of death, adjustments were made in the dictionary. Iris was able to code all causes of death in 94.4% of death certificates, but only 50.6% were directly coded, without adjustments. Among death certificates that the software was unable to fully code, 89.2% had a diagnosis of external causes (chapter XX of ICD-10). This group of causes of death showed less agreement when comparing the coding by Iris to the manual one. The software performed well, but it needs adjustments and improvement in its dictionary. In the upcoming versions of the software, its developers are trying to solve the external causes of death problem.
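The Iris workflow rests on a dictionary that maps diagnostic expressions to ICD-10 codes. A toy sketch of such a lookup (the matching logic and dictionary entries are our own illustration; the listed ICD-10 categories are real, but Iris's actual dictionary handling is far richer):

```python
import unicodedata

# Hypothetical dictionary entries: Portuguese diagnostic expressions mapped to
# ICD-10 codes, in the spirit of the dictionary built for Iris.
DICTIONARY = {
    "infarto agudo do miocardio": "I21.9",
    "pneumonia": "J18.9",
    "diabetes mellitus": "E14.9",
}

def normalize(term):
    """Lower-case and strip accents so 'Infarto Agudo do Miocárdio' matches."""
    nfkd = unicodedata.normalize("NFKD", term.lower())
    return "".join(c for c in nfkd if not unicodedata.combining(c)).strip()

def code_cause(term):
    """Return the ICD-10 code for a reported cause of death, or None."""
    return DICTIONARY.get(normalize(term))
```

An unmatched expression returns None, which is the point at which a real workflow would fall back to manual coding and a dictionary update.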

  4. Worldwide Report, Arms Control.

    Science.gov (United States)

    1986-02-04

    'Unpredictable Consequences' of SDI (Moscow PRAVDA, 7 Dec 85); Moscow TV on ASTEC Meeting, Military Monopolies, SDI (Tomas Kolesnichenko; Moscow...planet. JPRS-TAO-86-014, 4 February 1986, SDI AND SPACE ARMS: MOSCOW TV ON ASTEC MEETING, MILITARY MONOPOLIES, SDI

  5. Adaptation improves face trustworthiness discrimination

    Directory of Open Access Journals (Sweden)

    Bruce D Keefe

    2013-06-01

    Full Text Available Adaptation to facial characteristics, such as gender and viewpoint, has been shown to both bias our perception of faces and improve facial discrimination. In this study, we examined whether adapting to two levels of face trustworthiness improved sensitivity around the adapted level. Facial trustworthiness was manipulated by morphing between trustworthy and untrustworthy prototypes, each generated by morphing eight trustworthy and eight untrustworthy faces respectively. In the first experiment, just-noticeable differences (JNDs were calculated for an untrustworthy face after participants adapted to an untrustworthy face, a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical, except that JNDs were calculated for a trustworthy face. In the third experiment we examined whether adapting to an untrustworthy male face improved discrimination to an untrustworthy female face. In all experiments, participants completed a two-interval forced-choice adaptive staircase procedure, in which they judged which face was more untrustworthy. JNDs were derived from a psychometric function fitted to the data. Adaptation improved sensitivity to faces conveying the same level of trustworthiness when compared to no adaptation. When adapting to and discriminating around a different level of face trustworthiness there was no improvement in sensitivity and JNDs were equivalent to those in the no adaptation condition. The improvement in sensitivity was found to occur even when adapting to a face with different gender and identity. These results suggest that adaptation to facial trustworthiness can selectively enhance mechanisms underlying the coding of facial trustworthiness to improve perceptual sensitivity. These findings have implications for the role of our visual experience in the decisions we make about the trustworthiness of other individuals.
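The JNDs above come from a two-interval forced-choice adaptive staircase. A minimal sketch of a two-down/one-up staircase run against a simulated observer (our own illustration; the study's actual procedure and parameters may differ):

```python
import random

def staircase_jnd(threshold, start=0.5, step=0.05, n_reversals=8, seed=1):
    """Two-down/one-up adaptive staircase (converges near the 70.7%-correct
    point). 'threshold' drives a simulated observer: a trial is correct when
    the stimulus level exceeds the threshold plus a little sensory noise."""
    rng = random.Random(seed)
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated two-interval forced-choice response.
        correct = level + rng.gauss(0.0, 0.02) > threshold
        if correct:
            correct_run += 1
            if correct_run == 2:            # two correct in a row -> harder
                correct_run = 0
                if direction == +1:         # turning point: record a reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    # JND estimate: mean level over the last few reversals.
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

Run against an observer with a true threshold of 0.2, the staircase settles near that level, and the reversal average is taken as the JND.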

  6. THREEDANT: A code to perform three-dimensional, neutral particle transport calculations

    International Nuclear Information System (INIS)

    Alcouffe, R.E.

    1994-01-01

    The THREEDANT code solves the three-dimensional neutral particle transport equation in its first-order, multigroup, discrete-ordinates form. The code allows an unlimited number of groups (depending upon the cross section set), angular quadrature up to S-100, and unlimited Pn order, again depending upon the cross section set. The code has three options for spatial differencing: diamond with set-to-zero fixup, adaptive weighted diamond, and linear nodal. The geometry options are XYZ and RZΘ, with a special XYZ option based upon a volume fraction method. This allows objects or bodies of any shape to be modelled as input, which gives the code as much geometric description flexibility as the Monte Carlo code MCNP. The transport equation is solved by source iteration accelerated by the DSA method. Both inner and outer iterations are so accelerated. Some results are presented which demonstrate the effectiveness of these techniques. The code is available on several types of computing platforms.
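The transport equation above is solved by source iteration, whose convergence degrades as the scattering ratio approaches one; that is exactly what DSA acceleration addresses. A toy illustration (our own, for the one-group infinite-medium balance, not THREEDANT's algorithm):

```python
def source_iteration(c, q, tol=1e-8, max_iter=100000):
    """Unaccelerated source iteration for the one-group, infinite-medium
    balance phi = c*phi + q, where c is the scattering ratio. The exact
    answer is q/(1-c); the error shrinks by a factor c per sweep, which is
    why acceleration matters when c is close to 1."""
    phi = 0.0
    for n in range(1, max_iter + 1):
        phi_new = c * phi + q
        if abs(phi_new - phi) < tol:
            return phi_new, n
        phi = phi_new
    raise RuntimeError("source iteration did not converge")

phi_half, sweeps_half = source_iteration(c=0.5, q=1.0)   # exact answer: 2.0
phi_nine, sweeps_nine = source_iteration(c=0.9, q=1.0)   # exact answer: 10.0
```

The c = 0.9 case needs far more sweeps than c = 0.5, illustrating the slowdown that DSA is designed to remove.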

  7. XGC developments for a more efficient XGC-GENE code coupling

    Science.gov (United States)

    Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, Cs

    2017-10-01

    In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly-coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To permit such simulations, the gyrokinetic codes GENE and XGC will be coupled together. Numerical efforts are made to improve the agreement of the numerical schemes in the coupling region. One of the difficulties of coupling these codes is the incompatibility of their grids: GENE is a continuum grid-based code and XGC is a Particle-In-Cell code using an unstructured triangular mesh. A field-aligned filter is thus implemented in XGC. Even though XGC originally had an approximately field-following mesh, this field-aligned filter yields a perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the plasma properties, thus ensuring the same accuracy from the core to the edge regions.

  8. Automatically tuned adaptive differencing algorithm for 3-D SN implemented in PENTRAN

    International Nuclear Information System (INIS)

    Sjoden, G.; Courau, T.; Manalo, K.; Yi, C.

    2009-01-01

    We present an adaptive algorithm with an automated tuning feature to augment optimum differencing scheme selection for 3-D SN computations in Cartesian geometry. This adaptive differencing scheme has been implemented in the PENTRAN parallel SN code. Individual fixed zeroth-spatial-transport-moment-based schemes, including Diamond Zero (DZ), Directional Theta Weighted (DTW), and Exponential Directional Iterative (EDI) 3-D SN methods, were evaluated and compared with solutions generated using a code-tuned adaptive algorithm. Model problems considered include a fixed-source slab problem (using reflected y- and z-axes) which contained mixed shielding and diffusive regions, and a 17 x 17 PWR assembly eigenvalue test problem; these problems were benchmarked against multigroup MCNP5 Monte Carlo computations. Both problems were effective in highlighting the performance of the adaptive scheme compared to single schemes, and demonstrated that the adaptive tuning handles exceptions to the standard DZ-DTW-EDI adaptive strategy. The tuning feature includes special scheme-selection provisions for optically thin cells, and incorporates the ratio of the angular source density relative to the total angular collision density to best select the differencing method. Overall, the adaptive scheme demonstrated the best overall solution accuracy in the test problems. (authors)
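The DZ scheme named above is diamond differencing with a set-to-zero negative-flux fixup. A one-cell, one-dimensional sketch of that scheme (our own illustration; PENTRAN's 3-D implementation is considerably more involved):

```python
def diamond_cell(psi_in, mu, sigma_t, source, dx):
    """One spatial cell of a 1-D discrete-ordinates sweep (direction cosine
    mu > 0) using diamond differencing with set-to-zero fixup (DZ).
    Balance: mu*(psi_out - psi_in)/dx + sigma_t*psi_avg = source, with the
    diamond relation psi_avg = (psi_in + psi_out)/2."""
    a = mu / dx
    psi_out = ((a - 0.5 * sigma_t) * psi_in + source) / (a + 0.5 * sigma_t)
    if psi_out >= 0.0:
        psi_avg = 0.5 * (psi_in + psi_out)           # diamond relation
    else:
        psi_out = 0.0                                 # set-to-zero fixup
        psi_avg = (source + a * psi_in) / sigma_t     # re-balanced average
    return psi_out, psi_avg
```

In an optically thin cell the outgoing flux stays close to the incoming one; in a thick, strongly absorbing cell the plain diamond extrapolation goes negative and the fixup re-balances the cell.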

  9. Improving WCDMA network capacity using adaptive sectorisation ...

    African Journals Online (AJOL)

    A major problem affecting the capacity of Wideband Code Division Multiple Access (WCDMA) is interference. This work focuses on reducing the co-channel interference problem by the application of adaptive sectorisation in nonuniform traffic. It considers isolated areas of congested traffic called Hot Spots (HS).

  10. Using the adaptive blockset for simulation and rapid prototyping

    DEFF Research Database (Denmark)

    Ravn, Ole

    1999-01-01

    The paper presents the design considerations and implementational aspects of the Adaptive Blockset for Simulink, which has been developed in a prototype implementation. The basics of indirect adaptive controllers are summarized. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are: identification, controller design, controller and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.
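The identification block in an adaptive-control toolbox of this kind typically implements recursive least squares (RLS). A minimal sketch of an RLS estimator identifying a first-order plant (our own illustration, not the blockset's code; the plant parameters are arbitrary):

```python
import random

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam.
    theta: parameter estimates, P: covariance matrix (list of lists),
    phi: regressor vector, y: measured output."""
    n = len(phi)
    eps = y - sum(t * f for t, f in zip(theta, phi))       # prediction error
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(f * v for f, v in zip(phi, Pphi))
    K = [v / denom for v in Pphi]                          # gain vector
    theta = [t + k * eps for t, k in zip(theta, K)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Identify y(t) = a*y(t-1) + b*u(t-1) with true (a, b) = (0.8, 0.5).
rng = random.Random(0)
theta = [0.0, 0.0]
P = [[100.0, 0.0], [0.0, 100.0]]
y_prev = 0.0
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    y = 0.8 * y_prev + 0.5 * u
    theta, P = rls_update(theta, P, [y_prev, u], y)
    y_prev = y
```

After a couple of hundred excited samples the estimates settle near the true parameters, which is what the controller-design block would then consume.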

  11. Wind power within European grid codes: Evolution, status and outlook

    DEFF Research Database (Denmark)

    Vrana, Til Kristian; Flynn, Damian; Gomez-Lazaro, Emilio

    2018-01-01

    Grid codes are technical specifications that define the requirements for any facility connected to electricity grids. Wind power plants are increasingly facing system stability support requirements similar to conventional power stations, which is to some extent unavoidable, as the share of wind power in the generation mix is growing. The adaptation process of grid codes for wind power plants is not yet complete, and grid codes are expected to evolve further in the future. ENTSO-E is the umbrella organization for European TSOs, seen by many as a leader in terms of requirements sophistication ... is largely based on the definitions and provisions set out by ENTSO-E. The main European grid code requirements are outlined here, including also HVDC connections and DC-connected power park modules. The focus is on requirements that are considered particularly relevant for large wind power plants.

  12. Computer codes for simulating atomic-displacement cascades in solids subject to irradiation

    International Nuclear Information System (INIS)

    Asaoka, Takumi; Taji, Yukichi; Tsutsui, Tsuneo; Nakagawa, Masayuki; Nishida, Takahiko

    1979-03-01

    In order to study atomic displacement cascades originating from primary knock-on atoms in solids subject to incident radiation, the simulation code CASCADE/CLUSTER is adapted for use on the FACOM/230-75 computer system. In addition, the code is modified so as to plot the defect patterns in crystalline solids. As another simulation code for the cascade process, MARLOWE is also available for use on the FACOM system. To deal with the thermal annealing of point defects produced in the cascade process, the code DAIQUIRI, developed originally for body-centered cubic crystals, is modified to be applicable also to face-centered cubic lattices. By combining CASCADE/CLUSTER and DAIQUIRI, we then prepared a computer code system, CASCSRB, to deal with heavy irradiation or the saturation damage state of solids at normal temperature. Furthermore, a code system for the simulation of heavy irradiations, CASCMARL, is available, in which the MARLOWE code is substituted for CASCADE in the CASCSRB system. (author)

  13. JPRS Report, Soviet Union, USA: Economics, Politics, Ideology, No. 3, March 1988

    Science.gov (United States)

    1988-08-11

    CPSU Central Committee V.P. Nikonov received President J. Giffen of the American-Soviet Trade and Economic Council (ASTEC). 23—Deputies of the USSR...test against the campaign of slander and the instigation of anti-Soviet actions in the Estonian SSR. A.F. Dobrynin had a meeting with ASTEC

  14. Link adaptation performance evaluation for a MIMO-OFDM physical layer in a realistic outdoor environment

    OpenAIRE

    Han, C; Armour, SMD; Doufexi, A; Ng, KH; McGeehan, JP

    2006-01-01

    This paper presents a downlink performance analysis of a link adaptation (LA) algorithm applied to a MIMO-OFDM Physical Layer (PHY) which is a popular candidate for future generation cellular communication systems. The new LA algorithm attempts to maximize throughput and adaptation between various modulation and coding schemes in combination with both space-time block codes (STBC) and spatial multiplexing (SM) is based on knowledge of SNR and H matrix determinant; the parameters which are fou...

  15. Analysis of the SPERT III E-core experiment using the EUREKA-2 code

    International Nuclear Information System (INIS)

    Harami, Taikan; Uemura, Mutsumi; Ohnishi, Nobuaki

    1986-09-01

    EUREKA-2, a coupled nuclear, thermal-hydrodynamic kinetics code, was adapted for the testing of models and methods. Code evaluations were made with the reactivity-addition experiments of the SPERT III E-Core, a slightly enriched oxide core. The code was tested for non-damaging power excursions covering a wide range of initial operating conditions, such as cold-startup, hot-startup, hot-standby and operating-power initial conditions. Comparisons showed good agreement, within experimental errors, between calculated and experimental power, energy, reactivity and clad surface temperature. (author)

  16. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    Science.gov (United States)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.

  17. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    Science.gov (United States)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

    GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OPENMP and MPI and contains a PYTHON library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website github at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.

  18. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorisation with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  19. Filtering, Coding, and Compression with Malvar Wavelets

    Science.gov (United States)

    1993-12-01

    speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus...and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for...recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates

  20. Epigenetic codes programming class switch recombination

    Directory of Open Access Journals (Sweden)

    Bharat eVaidyanathan

    2015-09-01

    Full Text Available Class switch recombination imparts B cells with a fitness-associated adaptive advantage during a humoral immune response by using a precision-tailored DNA excision and ligation process to swap the default constant region gene of the antibody with a new one that has unique effector functions. This secondary diversification of the antibody repertoire is a hallmark of the adaptability of B cells when confronted with environmental and pathogenic challenges. Given that the nucleotide sequence of genes during class switching remains unchanged (genetic constraints), it is logical and necessary, therefore, to integrate the adaptability of B cells to an epigenetic state, which is dynamic and can be heritably modulated before, after or even during an antibody-dependent immune response. Epigenetic regulation encompasses heritable changes that affect function (phenotype) without altering the sequence information embedded in a gene, and includes histone, DNA and RNA modifications. Here, we review current literature on how B cells use an epigenetic code language as a means to ensure antibody plasticity in light of pathogenic insults.

  1. Coupling of the SYRTHES thermal code with the ESTET or N3S fluid mechanics codes; Couplage du code de thermique SYRTHES et des codes de mecanique des fluides ESTET ou N3S

    Energy Technology Data Exchange (ETDEWEB)

    Peniguel, C [Electricite de France (EDF), 78 - Chatou (France). Direction des Etudes et Recherches; Rupp, I [Simulog, 78 (France)

    1998-12-31

    Thermal aspects arise in several industrial applications with which Electricite de France (EdF) is concerned. In most cases, several physical phenomena like conduction, radiation and convection are involved in thermal transfers. The aim of this paper is to present a numerical tool adapted to industrial configurations which uses the coupling between fluid convection (resolved with ESTET in finite volumes or with N3S in finite elements) and radiant heat transfers between walls (resolved with SYRTHES using a radiosity method). SYRTHES manages the different thermal exchanges that can occur between fluid and solid domains thanks to an explicit iterative method. An extension of SYRTHES has been developed which allows several fluid codes to be taken into account simultaneously, using 'message passing' computer tools like Parallel Virtual Machine (PVM) and the code coupling software CALCIUM developed by the Direction of Studies and Researches (DER) of EdF. Various examples illustrate the interest of such a numerical tool. (J.S.) 12 refs.
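The explicit iterative method by which SYRTHES exchanges data between the solid and fluid domains can be pictured as an under-relaxed fixed-point loop: the fluid side sends a wall heat flux, the solid side sends back a wall temperature, and the exchange repeats until they agree. A toy sketch (our own; the two solvers below are stand-ins with arbitrary coefficients, not EDF's codes):

```python
def fluid_flux(T_wall, T_fluid=300.0, h=50.0):
    """Stand-in 'fluid code': convective flux into the wall (Newton cooling)."""
    return h * (T_fluid - T_wall)

def solid_temperature(q, T_back=400.0, k_over_e=200.0):
    """Stand-in 'solid code': 1-D conduction through a slab gives the wall
    temperature produced by an imposed flux q."""
    return T_back + q / k_over_e

def couple(relax=0.5, tol=1e-10, max_iter=1000):
    """Explicit fixed-point exchange between the two stand-in solvers,
    under-relaxed for stability."""
    T_wall = 350.0
    for it in range(max_iter):
        q = fluid_flux(T_wall)                 # fluid side sends a flux
        T_new = solid_temperature(q)           # solid side answers with a temperature
        if abs(T_new - T_wall) < tol:
            return T_new, it
        T_wall = T_wall + relax * (T_new - T_wall)
    raise RuntimeError("coupling did not converge")
```

With these coefficients the exchange converges to the consistent wall temperature of 380 in a few dozen iterations.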

  3. CoRoT/ESTA TASK 1 and TASK 3 comparison of the internal structure and seismic properties of representative stellar models. Comparisons between the ASTEC, CESAM, CLES, GARSTEC and STAROX codes

    Science.gov (United States)

    Lebreton, Yveline; Montalbán, Josefina; Christensen-Dalsgaard, Jørgen; Roxburgh, Ian W.; Weiss, Achim

    2008-08-01

    We compare stellar models produced by different stellar evolution codes for the CoRoT/ESTA project, comparing their global quantities, their physical structure, and their oscillation properties. We discuss the differences between models and identify the underlying reasons for these differences. The stellar models are representative of potential CoRoT targets. Overall we find very good agreement between the five different codes, but with some significant deviations. We find noticeable discrepancies (though still at the per cent level) that result from the handling of the equation of state, of the opacities and of the convective boundaries. The results of our work will be helpful in interpreting future asteroseismology results from CoRoT.

  4. Algorithms and data structures for massively parallel generic adaptive finite element codes

    KAUST Repository

    Bangerth, Wolfgang

    2011-12-01

    Today's largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library.
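One of the building blocks mentioned, the global enumeration of degrees of freedom, amounts to an exclusive prefix sum over the per-process counts of locally owned unknowns: each process's first global index is the total owned by all lower-ranked processes. A sketch of that step (our own serial illustration of what an MPI exclusive scan such as MPI_Exscan provides in the parallel case):

```python
def global_dof_numbering(locally_owned_counts):
    """Exclusive prefix sum over per-process DoF counts: offsets[i] is the
    first global index owned by process i; 'total' is the global DoF count."""
    offsets, total = [], 0
    for count in locally_owned_counts:
        offsets.append(total)
        total += count
    return offsets, total
```

Process i then numbers its unknowns offsets[i], offsets[i]+1, ... without any further communication.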

  5. ACDOS2: an improved neutron-induced dose rate code

    International Nuclear Information System (INIS)

    Lagache, J.C.

    1981-06-01

    To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown a computer code, ACDOS2, was written, which utilizes up-to-date libraries of cross-sections and radioisotope decay data. ACDOS2 is in ANSI FORTRAN IV, in order to make it readily adaptable elsewhere

  6. ACDOS2: an improved neutron-induced dose rate code

    Energy Technology Data Exchange (ETDEWEB)

    Lagache, J.C.

    1981-06-01

    To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown a computer code, ACDOS2, was written, which utilizes up-to-date libraries of cross-sections and radioisotope decay data. ACDOS2 is in ANSI FORTRAN IV, in order to make it readily adaptable elsewhere.

  7. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)
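The variable-length chaotic coding described above is, in the infinite-precision limit, equivalent to arithmetic coding: iterating a piecewise linear map whose branch widths equal the symbol probabilities sends a message to a subinterval of [0, 1) whose width is the product of those probabilities. A minimal Python sketch of this equivalence (an illustration of the principle, not the authors' implementation):

```python
def encode(symbols, probs):
    """Map a symbol sequence to a subinterval [low, high) of [0, 1)."""
    # The cumulative distribution defines the branch boundaries of the map.
    cdf, acc = {}, 0.0
    for s, p in probs.items():
        cdf[s] = (acc, acc + p)
        acc += p
    low, high = 0.0, 1.0
    for s in symbols:
        lo, hi = cdf[s]
        width = high - low
        low, high = low + width * lo, low + width * hi
    return low, high

def decode(x, n, probs):
    """Recover n symbols by iterating the forward piecewise linear map."""
    out = []
    for _ in range(n):
        acc = 0.0
        for s, p in probs.items():
            if acc <= x < acc + p:
                out.append(s)
                x = (x - acc) / p   # forward map stretches the branch to [0, 1)
                break
            acc += p
    return out

probs = {'a': 0.5, 'b': 0.25, 'c': 0.25}
msg = list('abacab')
low, high = encode(msg, probs)
assert decode((low + high) / 2, len(msg), probs) == msg
```

Any point inside the final interval identifies the message, so roughly -log2(high - low) bits suffice, which is the entropy-optimal rate; the paper's finite-precision scheme realizes this with variable-length blocks bounded by the register length.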

  8. Adaptation and implementation of the TRACE code for transient analysis in designs of lead-cooled fast reactors

    International Nuclear Information System (INIS)

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2015-01-01

    The Lead-Cooled Fast Reactor (LFR) has been identified as one of the promising future reactor concepts in the technology roadmap of the Generation IV International Forum (GIF) as well as in the Deployment Strategy of the European Sustainable Nuclear Industrial Initiative (ESNII), both aiming at improved sustainability, enhanced safety, economic competitiveness, and proliferation resistance. This new nuclear reactor concept requires the development of computational tools to be applied in design and safety assessments to confirm improved inherent and passive safety features of this design. One approach to this issue is to modify the current computational codes developed for the simulation of Light Water Reactors towards their applicability for the new designs. This paper reports on the performed modifications of the TRACE system code to make it applicable to LFR safety assessments. The capabilities of the modified code are demonstrated on a series of benchmark exercises performed versus other safety analysis codes. (Author)

  9. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the

  10. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the
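
The statistical approach contrasted in the two records above scores a fixed code against many randomly generated alternatives, where a code's cost is the harm done by single-nucleotide point mutations. A toy Python sketch of that fitness measure, using a hypothetical 16-codon alphabet and made-up amino acid property values (an illustration of the method, not the paper's model or data):

```python
import itertools
import random

BASES = 'ACGU'
# Hypothetical property values for 8 toy "amino acids" (invented numbers).
PROP = {i: float(i) for i in range(8)}

def score(code):
    """Mean squared property change over all single-base codon mutations."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        for pos in range(len(codon)):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = codon[:pos] + b + codon[pos + 1:]
                total += (PROP[code[mut]] - PROP[aa]) ** 2
                count += 1
    return total / count

# A fixed toy code: 16 two-base codons, 8 amino acids, redundancy 2.
codons = [''.join(c) for c in itertools.product(BASES, repeat=2)]
assignment = [i % 8 for i in range(16)]
code = dict(zip(codons, assignment))

# Statistical approach: how many random reassignments beat the fixed code?
random.seed(0)
base = score(code)
better = sum(
    score(dict(zip(codons, random.sample(assignment, len(assignment))))) < base
    for _ in range(200)
)
print(f"random codes better than the fixed code: {better}/200")
```

A genetic algorithm, as in the paper, replaces the random sampling with selection and mutation over codon reassignments, using the same kind of score as fitness.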

  11. Adaptive colour contrast coding in the salamander retina efficiently matches natural scene statistics.

    Directory of Open Access Journals (Sweden)

    Genadiy Vasserman

    Full Text Available The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretical optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes on the colour processing level.

  12. Color Image Authentication and Recovery via Adaptive Encoding

    Directory of Open Access Journals (Sweden)

    Chun-Hung Chen

    2014-01-01

    Full Text Available We describe an authentication and recovery scheme for color image protection based on adaptive encoding. The image blocks are categorized based on their contents and different encoding schemes are applied according to their types. Such adaptive encoding results in better image quality and more robust image authentication. The approximations of the luminance and chromatic channels are carefully calculated, and for the purpose of reducing the data size, differential coding is used to encode the channels with variable size according to the characteristic of the block. The recovery data which represents the approximation and the detail of the image is embedded for data protection. The necessary data is well protected by using error correcting coding and duplication. The experimental results demonstrate that our technique is able to identify and localize image tampering, while preserving high quality for both watermarked and recovered images.

  13. Multi keno-VAX a modified version of the reactor computer code Multi keno-2

    Energy Technology Data Exchange (ETDEWEB)

    Imam, M [National Center for Nuclear Safety and Radiation Control, Atomic Energy Authority, Cairo (Egypt)]

    1995-10-01

    The reactor computer code Multi keno-2 was developed in Japan from the original Monte Carlo code Keno-IV. Through applications of this code to some real problems, fatal errors were detected. These errors are related to the restart option in the code. The restart option is essential for solving time-consuming problems on a minicomputer like the VAX-6320. These errors were corrected and other modifications were carried out in the code. Because of these modifications, a new input data description was written for the code. Thus a new VAX/VMS version of the program was developed, which is also adaptable for mini-mainframes. This new program, called Multi keno-VAX, was accepted into the NEA-IAEA data bank and added to its international computer codes library. 1 fig.

  14. Multi keno-VAX a modified version of the reactor computer code Multi keno-2

    International Nuclear Information System (INIS)

    Imam, M.

    1995-01-01

    The reactor computer code Multi keno-2 was developed in Japan from the original Monte Carlo code Keno-IV. Through applications of this code to some real problems, fatal errors were detected. These errors are related to the restart option in the code. The restart option is essential for solving time-consuming problems on a minicomputer like the VAX-6320. These errors were corrected and other modifications were carried out in the code. Because of these modifications, a new input data description was written for the code. Thus a new VAX/VMS version of the program was developed, which is also adaptable for mini-mainframes. This new program, called Multi keno-VAX, was accepted into the NEA-IAEA data bank and added to its international computer codes library. 1 fig.

  15. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  16. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  17. Comparison of two apheresis systems for the collection of CD14+ cells intended to be used in dendritic cell culture.

    Science.gov (United States)

    Strasser, Erwin F; Berger, Thomas G; Weisbach, Volker; Zimmermann, Robert; Ringwald, Jürgen; Schuler-Thurner, Beatrice; Zingsem, Jürgen; Eckstein, Reinhold

    2003-09-01

    Monocytes collected by leukapheresis are increasingly used for dendritic cell (DC) culture in cell factories suitable for DC vaccination in cancer. Using modified MNC programs on two apheresis systems (Cobe Spectra and Fresenius AS.TEC204), leukapheresis components collected from 84 patients with metastatic malignant melanoma and from 31 healthy male donors were investigated. MNCs, monocytes, RBCs, and platelets (PLTs) in donors and components were analyzed by cell counters, WBC differential counts, and flow cytometry. In 5-L collections, Astec showed better results regarding monocyte collection rates (11.0 vs. 7.4 x 10(6)/min, p = 0.04) and efficiencies (collection efficiency, 51.9 vs. 31.9%; p Astec components contained high residual RBCs. Compared to components with low residual PLTs, high PLT concentration resulted in higher monocyte loss (48 vs. 20%, p Astec is more efficient in 5-L MNC collections compared to the Spectra. Components with high residual PLTs result in high MNC loss by purification procedures. Thus, optimizing MNC programs is essential to obtain components with high MNC yields and low residual cells as prerequisite for high DC yields.

  18. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  19. Study to Analyze the Acquisition of Automatic Test Equipment (ATE) Systems. Data Sequence Number A003

    Science.gov (United States)

    1973-12-27

    Systems Test Equipment Comparator, ASTEC) at NAEC can provide a very accurate (on a pin-by-pin basis) match between the UUT and ATE in their data bank...In addition, abbreviated summary data on the ATE is also available to users. ASTEC will also file the UUT data as part of its data bank so that

  20. Prototype Protein-Based Three-Dimensional Memory

    Science.gov (United States)

    2003-01-01

    Power Supply Box has an on/off switch, fuses and an indicator light. Within the box are two power supplies. The first supply (ASTEC ATV251) provides...two laser modules. The second supply (ASTEC ATV 12N3.4) provides +12V at 3.4A. This supply is used for the Interface Board, the 3-D Motion Interface

  1. A computerized energy systems code and information library at Soreq

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, I; Shapira, M; Caner, D; Sapier, D [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center]

    1996-12-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, both Soreq-NRC and Ben-Gurion University agreed to establish, in 1991, a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to assure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest to nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation operating under a Linux operating system to give users of the library an easy on-line way to access resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information or to solve their problems, using codes from the library, on the computer at Soreq (authors).

  2. A computerized energy systems code and information library at Soreq

    International Nuclear Information System (INIS)

    Silverman, I.; Shapira, M.; Caner, D.; Sapier, D.

    1996-01-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, both Soreq-NRC and Ben-Gurion University agreed to establish, in 1991, a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to assure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest to nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation operating under a Linux operating system to give users of the library an easy on-line way to access resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information or to solve their problems, using codes from the library, on the computer at Soreq (authors)

  3. A low-delay 8 Kb/s backward-adaptive CELP coder

    Science.gov (United States)

    Neumeyer, L. G.; Leblanc, W. P.; Mahmoud, S. A.

    1990-01-01

    Code-excited linear prediction (CELP) coding is an efficient technique for compressing speech sequences. Communications quality of speech can be obtained at bit rates below 8 Kb/s. However, relatively large coding delays are necessary to buffer the input speech in order to perform the LPC analysis. A low-delay 8 Kb/s CELP coder is introduced in which the short-term predictor is based on past synthesized speech. A new distortion measure that improves the tracking of the formant filter is discussed. Formal listening tests showed that the performance of the backward-adaptive coder is almost as good as that of the conventional CELP coder.
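
The backward adaptation idea in this record can be illustrated in a few lines: the short-term predictor is refit only on past synthesized (reconstructed) samples, so the decoder derives identical coefficients and no predictor coefficients need to be transmitted, eliminating the analysis buffering delay. A simplified Python sketch with a least-squares order-2 predictor and a scalar residual quantizer (an illustration of the principle, not the paper's 8 Kb/s coder):

```python
import numpy as np

ORDER, WINDOW = 2, 64

def lpc(history, order):
    """Least-squares LPC coefficients fit on a window of past samples."""
    X = np.array([history[i:i + order][::-1]
                  for i in range(len(history) - order)])
    y = history[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def code(signal, q=0.05):
    """Closed-loop encode/decode: only the quantized residual is 'sent'."""
    recon = list(signal[:WINDOW])          # shared bootstrap state
    residuals = []
    for x in signal[WINDOW:]:
        # Refit on reconstructed samples only -> decoder gets the same 'a'.
        a = lpc(np.array(recon[-WINDOW:]), ORDER)
        pred = a @ recon[-1:-ORDER - 1:-1]
        r = q * round((x - pred) / q)      # quantized residual (the "code")
        residuals.append(r)
        recon.append(pred + r)             # decoder reproduces this exactly
    return residuals, np.array(recon)

t = np.arange(400)
sig = np.sin(0.1 * t)
res, recon = code(sig)
print("max reconstruction error:", np.max(np.abs(recon - sig)))
```

Because the quantizer sits inside the reconstruction loop, the error per sample stays bounded by half a quantization step rather than accumulating.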

  4. ertCPN: The adaptations of the coloured Petri-Net theory for real-time embedded system modeling and automatic code generation

    Directory of Open Access Journals (Sweden)

    Wattanapong Kurdthongmee

    2003-05-01

    Full Text Available A real-time system is a computer system that monitors or controls an external environment. The system must meet various timing and other constraints that are imposed on it by the real-time behaviour of the external world. One of the differences between a real-time and a conventional software is that a real-time program must be both logically and temporally correct. To successfully design and implement a real-time system, some analysis is typically done to assure that requirements or designs are consistent and that they satisfy certain desirable properties that may not be immediately obvious from specification. Executable specifications, prototypes and simulation are particularly useful in real-time systems for debugging specifications. In this paper, we propose the adaptations to the coloured Petri-net theory to ease the modeling, simulation and code generation process of an embedded, microcontroller-based, real-time system. The benefits of the proposed approach are demonstrated by use of our prototype software tool called ENVisAge (an Extended Coloured Petri-Net Based Visual Application Generator Tool).

  5. How to Crack the Sugar Code.

    Science.gov (United States)

    Gabius, H-J

    2017-01-01

    The known ubiquitous presence of glycans fulfils an essential prerequisite for fundamental roles in cell sociology. Since carbohydrates are chemically predestined to form biochemical messages of a maximum of structural diversity in a minimum of space, coding of biological information by sugars is the reason for the broad occurrence of cellular glycoconjugates. Their glycans originate from sophisticated enzymatic assembly and dynamically adaptable remodelling. These signals are read and translated into effects by receptors (lectins). The functional pairing between lectins and their counterreceptor(s) is highly specific, often orchestrated by intimate co-regulation of the receptor, the cognate glycan and the bioactive scaffold (e.g., an integrin). Bottom-up approaches, teaming up synthetic and supramolecular chemistry to prepare fully programmable nanoparticles as binding partners with systematic network analysis of lectins and rational design of variants, enable us to delineate the rules of the sugar code.

  6. Subband coding of digital audio signals without loss of quality

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Breeuwer, Marcel; van de Waal, Robbert

    1989-01-01

    A subband coding system for high quality digital audio signals is described. To achieve low bit rates at a high quality level, it exploits the simultaneous masking effect of the human ear. It is shown how this effect can be used in an adaptive bit-allocation scheme. The proposed approach has been
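
The adaptive bit-allocation idea in this record can be sketched as a greedy loop: each extra bit in a subband buys roughly 6 dB of quantization-noise reduction, so bits go to the subband whose quantization noise currently exceeds its masking threshold the most. The subband powers and masking thresholds below are invented for illustration, not taken from the paper:

```python
power = [50.0, 30.0, 18.0, 6.0]        # subband signal power, dB (invented)
mask  = [10.0, 12.0, 14.0, 16.0]       # masking thresholds, dB (invented)

def allocate(power, mask, total_bits, db_per_bit=6.02):
    """Greedy bit allocation driven by signal-to-mask ratios."""
    bits = [0] * len(power)
    for _ in range(total_bits):
        # Noise still above the mask in each band, given current bits.
        need = [p - m - db_per_bit * b for p, m, b in zip(power, mask, bits)]
        i = max(range(len(need)), key=need.__getitem__)
        if need[i] <= 0:               # all noise already below the mask
            break
        bits[i] += 1
    return bits

print(allocate(power, mask, 12))       # → [7, 3, 1, 0]
```

With these numbers the loop stops after 11 of the 12 available bits, once every band's noise is masked; a band whose power is already below its threshold (band 3 here) receives no bits at all, which is where the bit-rate saving comes from.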

  7. Axisym finite element code: modifications for pellet-cladding mechanical interaction analysis

    International Nuclear Information System (INIS)

    Engelman, G.P.

    1978-10-01

    Local strain concentrations in nuclear fuel rods are known to be potential sites for failure initiation. Assessment of such strain concentrations requires a two-dimensional analysis of stress and strain in both the fuel and the cladding during pellet-cladding mechanical interaction. To provide such a capability in the FRAP (Fuel Rod Analysis Program) codes, the AXISYM code (a small finite element program developed at the Idaho National Engineering Laboratory) was modified to perform a detailed fuel rod deformation analysis. This report describes the modifications which were made to the AXISYM code to adapt it for fuel rod analysis and presents comparisons made between the two-dimensional AXISYM code and the FRACAS-II code. FRACAS-II is the one-dimensional (generalized plane strain) fuel rod mechanical deformation subcode used in the FRAP codes. Predictions of these two codes should be comparable away from the fuel pellet free ends if the state of deformation at the pellet midplane is near that of generalized plane strain. The excellent agreement obtained in these comparisons checks both the correctness of the AXISYM code modifications as well as the validity of the assumption of generalized plane strain upon which the FRACAS-II subcode is based

  8. Sobol indices for dimension adaptivity in sparse grids

    NARCIS (Netherlands)

    Dwight, R.P.; Desmedt, S.G.L.; Shoeibi Omrani, P.

    2016-01-01

    Propagation of random variables through computer codes of many inputs is primarily limited by computational expense. The use of sparse grids mitigates these costs somewhat; here we show how Sobol indices can be used to perform dimension adaptivity to mitigate them further. The method is compared to
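
The use of Sobol indices for dimension adaptivity can be illustrated with the standard pick-freeze (Saltelli) estimator of first-order indices: dimensions with large indices account for most of the output variance and are the ones worth refining in the sparse grid. The test function below is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                       # most variance comes from x0, then x1
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]

N, d = 20000, 3
A = rng.uniform(size=(N, d))    # two independent sample matrices
B = rng.uniform(size=(N, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]         # freeze all inputs except dimension i
    S.append(np.mean(fB * (f(ABi) - fA)) / var)

print(np.round(S, 2))           # dimension 0 dominates
```

For this linear test function the exact first-order indices are 16/17.01, 1/17.01 and 0.01/17.01, so a dimension-adaptive scheme would spend nearly all refinement on the first input.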

  9. How to review 4 million lines of ATLAS code

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; The ATLAS collaboration; Lampl, Walter

    2017-01-01

    As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties in even logistically organising such reviews in an already busy community, how to examine areas in sufficient depth to learn key areas in need of upgrade, yet...

  10. How To Review 4 Million Lines of ATLAS Code

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration

    2016-01-01

    As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties in even logistically organising such reviews in an already busy community, how to examine areas in sufficient depth to learn key areas in need of upgrade, yet...

  11. Extraordinarily Adaptive Properties of the Genetically Encoded Amino Acids

    Science.gov (United States)

    Ilardo, Melissa; Meringer, Markus; Freeland, Stephen; Rasulev, Bakhtiyor; Cleaves II, H. James

    2015-01-01

    Using novel advances in computational chemistry, we demonstrate that the set of 20 genetically encoded amino acids, used nearly universally to construct all coded terrestrial proteins, has been highly influenced by natural selection. We defined an adaptive set of amino acids as one whose members thoroughly cover relevant physico-chemical properties, or “chemistry space.” Using this metric, we compared the encoded amino acid alphabet to random sets of amino acids. These random sets were drawn from a computationally generated compound library containing 1913 alternative amino acids that lie within the molecular weight range of the encoded amino acids. Sets that cover chemistry space better than the genetically encoded alphabet are extremely rare and energetically costly. Further analysis of more adaptive sets reveals common features and anomalies, and we explore their implications for synthetic biology. We present these computations as evidence that the set of 20 amino acids found within the standard genetic code is the result of considerable natural selection. The amino acids used for constructing coded proteins may represent a largely global optimum, such that any aqueous biochemistry would use a very similar set. PMID:25802223

  12. Extraordinarily adaptive properties of the genetically encoded amino acids.

    Science.gov (United States)

    Ilardo, Melissa; Meringer, Markus; Freeland, Stephen; Rasulev, Bakhtiyor; Cleaves, H James

    2015-03-24

    Using novel advances in computational chemistry, we demonstrate that the set of 20 genetically encoded amino acids, used nearly universally to construct all coded terrestrial proteins, has been highly influenced by natural selection. We defined an adaptive set of amino acids as one whose members thoroughly cover relevant physico-chemical properties, or "chemistry space." Using this metric, we compared the encoded amino acid alphabet to random sets of amino acids. These random sets were drawn from a computationally generated compound library containing 1913 alternative amino acids that lie within the molecular weight range of the encoded amino acids. Sets that cover chemistry space better than the genetically encoded alphabet are extremely rare and energetically costly. Further analysis of more adaptive sets reveals common features and anomalies, and we explore their implications for synthetic biology. We present these computations as evidence that the set of 20 amino acids found within the standard genetic code is the result of considerable natural selection. The amino acids used for constructing coded proteins may represent a largely global optimum, such that any aqueous biochemistry would use a very similar set.

  13. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  14. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  15. General purpose code for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wilcke, W.W.

    1983-01-01

    A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility, the code is organized like a general-purpose computer, operating on a vector describing the time-dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations.
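    A minimal sketch of this design, with an invented two-instruction "program" acting on a [species-0 count, species-1 count, time] state vector (the names and numbers are not from the MONTHY code itself):

```python
import random

# Sketch of the MONTHY design idea (all names invented): the simulation
# "program" is a list of user-defined instructions, each operating on a
# vector that describes the time-dependent state of the system.

def decay(state, rng):
    # user-defined instruction: species 0 randomly converts to species 1
    if state[0] > 0 and rng.random() < 0.3:
        state[0] -= 1
        state[1] += 1

def step_time(state, rng):
    # user-defined instruction: advance the clock stored in the state vector
    state[2] += 1

def run(program, state, rng, until):
    # iterative execution with a conditional stop, as the abstract describes
    while state[2] < until:
        for instruction in program:
            instruction(state, rng)
    return state

final = run([decay, step_time], [10, 0, 0], random.Random(1), until=50)
print(final)   # [remaining, decayed, time]; remaining + decayed == 10
```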

  16. Modeling report of DYMOND code (DUPIC version)

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Yacout, Abdellatif M.

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analysis of the LWR once-through cycle and of mixed LWR-FBR parks. Since extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts (the source language platform, input supply and output), although these are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and of spent fuel. Part B describes the fuel cycle model, following the fuel flow from the beginning to the end of the cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amounts of spent fuel in storage and disposal. Part D covers other fuel cycle models, in particular the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which provides all cost information, such as uranium mining cost, reactor operating cost and fuel cost.

  17. Modeling report of DYMOND code (DUPIC version)

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan [KAERI, Taejon (Korea, Republic of)]; Yacout, Abdellatif M [Argonne National Laboratory, Illinois (United States)]

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analysis of the LWR once-through cycle and of mixed LWR-FBR parks. Since extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts (the source language platform, input supply and output), although these are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and of spent fuel. Part B describes the fuel cycle model, following the fuel flow from the beginning to the end of the cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amounts of spent fuel in storage and disposal. Part D covers other fuel cycle models, in particular the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which provides all cost information, such as uranium mining cost, reactor operating cost and fuel cost.
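    The bookkeeping described in Parts A to C can be illustrated with a toy annual mass balance (all quantities and policies below are invented, not DYMOND's equations):

```python
# Invented single-park mass balance in the spirit of DYMOND's Parts A-C
# (figures are made up; units are tonnes of heavy metal per year): fresh fuel
# demand is met from recovered material first, spent fuel accumulates in
# storage, and half of the backlog is reprocessed each year.

def simulate(years, demand=20.0, recovery=0.95):
    mined, storage, recovered = 0.0, 0.0, 0.0
    for _ in range(years):
        recycled = min(recovered, demand)    # reprocessed stock feeds demand
        mined += demand - recycled           # the remainder is freshly mined
        recovered -= recycled
        storage += demand                    # discharged spent fuel
        reprocessed = 0.5 * storage          # reprocess half of the backlog
        storage -= reprocessed
        recovered += recovery * reprocessed  # 1 - recovery is lost to waste
    return mined, storage, recovered

m, s, r = simulate(100)
print(f"mined {m:.1f} t, in storage {s:.1f} t, recovered stock {r:.1f} t")
```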

  18. Aeroelastic code development activities in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Wright, A.D. [National Renewable Energy Lab., Golden, Colorado (United States)

    1996-09-01

    Designing wind turbines to be fatigue resistant and to have long lifetimes at minimal cost is a major goal of the federal wind program and the wind industry in the United States. To achieve this goal, we must be able to predict critical loads for a wide variety of different wind turbines operating under extreme conditions. The codes used for wind turbine dynamic analysis must be able to analyze a wide range of different wind turbine configurations as well as rapidly predict the loads due to turbulent wind inflow with a minimal set of degrees of freedom. Code development activities in the US have taken a two-pronged approach in order to satisfy both of these criteria: (1) development of a multi-purpose code which can be used to analyze a wide variety of wind turbine configurations without having to develop new equations of motion with each configuration change, and (2) development of specialized codes with minimal sets of specific degrees of freedom for analysis of two- and three-bladed horizontal axis wind turbines and calculation of machine loads due to turbulent inflow. In the first method we have adapted a commercial multi-body dynamics simulation package for wind turbine analysis. In the second approach we are developing specialized codes with limited degrees of freedom, usually specified in the modal domain. This paper will summarize progress to date in the development, validation, and application of these codes. (au) 13 refs.

  19. An object-oriented decomposition of the adaptive-hp finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Wiley, J.C.

    1994-12-13

    Adaptive-hp methods are those which use a refinement control strategy, driven by a local error estimate, to locally modify the element size, h, and polynomial order, p. The result is an unstructured mesh in which each node may be associated with a different polynomial order and which generally requires complex data structures to implement. Object-oriented design strategies, and languages which support them, e.g., C++, help control the complexity of these methods. Here an overview of the major classes and the class structure of an adaptive-hp finite element code is given. The essential finite element structure is described in terms of four areas of computation, each with its own dynamic characteristics. Implications of converting the code for a distributed-memory parallel environment are also discussed.
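    The decomposition can be caricatured in a few classes (invented names, not those of the paper's code): each element owns its own h and p, and a controller refines whichever elements exceed the error tolerance:

```python
# Toy object decomposition of an adaptive-hp loop: the error estimate is a
# stand-in a-priori bound (error ~ h**p), and the controller p-refines up to
# order 4 before falling back to h-refinement. All of this is illustrative.

class Element:
    def __init__(self, h, p):
        self.h, self.p = h, p

    def error_estimate(self):
        return self.h ** self.p

class Mesh:
    def __init__(self, elements):
        self.elements = elements

    def refine(self, tol):
        new = []
        for e in self.elements:
            if e.error_estimate() <= tol:
                new.append(e)                  # accurate enough: keep
            elif e.p < 4:
                new.append(Element(e.h, e.p + 1))   # p-refine first
            else:
                new += [Element(e.h / 2, e.p),      # then h-refine: split
                        Element(e.h / 2, e.p)]
        self.elements = new

mesh = Mesh([Element(0.5, 1), Element(0.25, 2)])
for _ in range(10):
    mesh.refine(tol=1e-3)
print(len(mesh.elements), max(e.p for e in mesh.elements))
```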

  20. Amino acid fermentation at the origin of the genetic code.

    Science.gov (United States)

    de Vladar, Harold P

    2012-02-10

    There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acid pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, so that selection can act and refine the assignments.
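    The flavour of such a randomization test can be sketched with a toy six-letter alphabet (the amino-acid pairs, yields and adapter pairing below are all invented; the real tests randomise the full genetic code):

```python
import random

# Toy permutation test: six invented amino acids sit on six proto-adapters;
# adapters (0,1), (2,3), (4,5) are complementary pairs, and a pair yields ATP
# only if its two amino acids form a (made-up) Stickland pair. The observed
# assignment pairs Stickland partners on complementary adapters.
stickland = {frozenset(("ala", "gly")): 3,
             frozenset(("leu", "pro")): 2,
             frozenset(("val", "ser")): 1}
aas = ["ala", "gly", "leu", "pro", "val", "ser"]
complementary = [(0, 1), (2, 3), (4, 5)]

def atp_yield(assignment):
    return sum(stickland.get(frozenset((assignment[i], assignment[j])), 0)
               for i, j in complementary)

observed = atp_yield(aas)

random.seed(0)
null = []
for _ in range(5000):
    shuffled = aas[:]
    random.shuffle(shuffled)          # a random code assignment
    null.append(atp_yield(shuffled))

p_value = sum(y >= observed for y in null) / len(null)
print(f"observed ATP yield {observed}, permutation p = {p_value:.3f}")
```

    As in the abstract, shuffled assignments still reach substantial yields fairly often, so the permutation p-value is small but not vanishing.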

  1. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. The automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the Foxbase language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consist of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into another data-processing program was possible. This program has the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology.
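    The two-step lookup can be sketched as follows (the dictionary entries and the "lung"/"pneumonia" reading of the example code are invented; only the '131.3661' format comes from the abstract):

```python
# Hypothetical miniature of the described lookup: the organ code is resolved
# first, and its first digit selects the matching pathology code file.
organ_codes = {"lung": "131", "femur": "441"}      # invented entries
pathology_files = {
    "1": {"pneumonia": "3661"},   # pathology file for organ codes 1xx
    "4": {"fracture": "41"},      # pathology file for organ codes 4xx
}

def acr_code(organ, finding):
    organ_code = organ_codes[organ]
    path_file = pathology_files[organ_code[0]]     # chosen by the first digit
    return f"{organ_code}.{path_file[finding]}"

print(acr_code("lung", "pneumonia"))   # → 131.3661
```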

  2. The Interpretation Of Speech Code In A Communication Ethnographic Context For Outsider Students Of Graduate Communication Science Universitas Sumatera Utara In Medan

    Directory of Open Access Journals (Sweden)

    Fauzi Eka Putra

    2017-06-01

    Full Text Available Interpreting the typical Medan speech code is unique and distinctive, and it can create confusion for outsider students because of the similarities and differences among speech codes in Medan. Therefore, graduate students of communication science at Universitas Sumatera Utara who originate from outside North Sumatera need to learn, comprehend and be aware of these codes in order to communicate effectively. The purpose of this research is to discover how graduate students of communication science at Universitas Sumatera Utara who originate from outside North Sumatera interpret speech codes while adapting to life in Medan. This research uses a qualitative method with the study of the ethnography of communication and acculturation. The subjects of this research are graduate students of communication science at Universitas Sumatera Utara who originate from outside North Sumatera and are adapting to Medan. Data were collected through interviews, observation and documentation. The conclusion of this research shows that the interpretation of speech codes by students from outside North Sumatera while adapting to Medan leads to an acculturation process of assimilation and integration.

  3. Adaption, validation and application of advanced codes with 3-dimensional neutron kinetics for accident analysis calculations - STC with Bulgaria

    International Nuclear Information System (INIS)

    Grundmann, U.; Kliem, S.; Mittag, S.; Rohde, U.; Seidel, A.; Panayotov, D.; Ilieva, B.

    2001-08-01

    In the frame of a project on scientific-technical co-operation funded by BMBF/BMWi, the program code DYN3D and the coupled code ATHLET-DYN3D have been transferred to the Institute for Nuclear Research and Nuclear Energy (INRNE) Sofia. The coupled code represents an implementation of the 3D core model DYN3D, developed by FZR, into the GRS thermal-hydraulics code system ATHLET. For the purpose of validating these codes, a measurement data base on a start-up experiment performed at unit 6 of the Kozloduy NPP (VVER-1000/V-320) has been generated. The results of the validation calculations were compared with measured values from the data base. A simplified model for the estimation of cross flow mixing between fuel assemblies has been implemented into the program code DYN3D by Bulgarian experts. Using this cross flow model, transient processes with asymmetrical boundary conditions can be analysed more realistically. The validation of the implemented model was performed by means of comparison calculations between the modified DYN3D code and the thermal-hydraulics code COBRA-4I, and also on the basis of the measurement data collected from the Kozloduy NPP. (orig.)

  4. The Digital Forensics and Security Challenge of QR Codes

    Directory of Open Access Journals (Sweden)

    Nik Thompson

    2013-06-01

    Full Text Available The disciplines of digital forensics and IT security must adapt to new technologies and methods of interaction with those technologies.  New technologies present both challenges and opportunities for providing evidence for digital forensics investigations.  These may be in the form of new devices such as smartphones or new methods of sharing information, such as social networks.  One such rapidly emerging interaction technology is the use of Quick Response (QR codes.  These offer a physical mechanism for quick access to web sites for advertising and social interaction.  This paper argues that the common implementation of QR codes potentially presents security issues which must be considered.  It analyzes potential privacy problems with QR codes and studies a range of devices as they may have implications for the process of evidence collection and analysis.

  5. Reversible wavelet filter banks with side informationless spatially adaptive low-pass filters

    Science.gov (United States)

    Abhayaratne, Charith

    2011-07-01

    Wavelet transforms that have an adaptive low-pass filter are useful in applications that require the signal singularities, sharp transitions, and image edges to be left intact in the low-pass signal. In scalable image coding, the spatial resolution scalability is achieved by reconstructing the low-pass signal subband, which corresponds to the desired resolution level, and discarding other high-frequency wavelet subbands. In such applications, it is vital to have low-pass subbands that are not affected by smoothing artifacts associated with low-pass filtering. We present the mathematical framework for achieving 1-D wavelet transforms that have a spatially adaptive low-pass filter (SALP) using the prediction-first lifting scheme. The adaptivity decisions are computed using the wavelet coefficients, and no bookkeeping is required for the perfect reconstruction. Then, 2-D wavelet transforms that have a spatially adaptive low-pass filter are designed by extending the 1-D SALP framework. Because the 2-D polyphase decompositions are used in this case, the 2-D adaptivity decisions are made nonseparable as opposed to the separable 2-D realization using 1-D transforms. We present examples using the 2-D 5/3 wavelet transform and their lossless image coding and scalable decoding performances in terms of quality and resolution scalability. The proposed 2-D-SALP scheme results in better performance compared to the existing adaptive update lifting schemes.
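    The idea behind a side-informationless adaptive low-pass filter can be illustrated with a crude integer lifting sketch (this is not the paper's SALP filter bank): the details are predicted first, and the update into the low-pass band is skipped wherever a neighbouring detail coefficient is large, so edges stay sharp; since the decision is recomputed from the transmitted detail coefficients, no bookkeeping is needed for perfect reconstruction:

```python
# Prediction-first integer lifting with a toy adaptive update (invented
# threshold rule, 5/3-like filters). The inverse repeats the same decision
# from the detail signal alone, so reconstruction is exact.

def forward(x, thresh=8):
    even, odd = x[0::2], x[1::2]
    detail = [o - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
              for i, o in enumerate(odd)]                 # predict step
    approx = []
    for i, e in enumerate(even):
        dl = detail[i - 1] if i > 0 else detail[0]
        dr = detail[i] if i < len(detail) else detail[-1]
        if max(abs(dl), abs(dr)) < thresh:                # smooth: update
            approx.append(e + (dl + dr + 2) // 4)
        else:                                             # edge: pass through
            approx.append(e)
    return approx, detail

def inverse(approx, detail, thresh=8):
    even = []
    for i, a in enumerate(approx):
        dl = detail[i - 1] if i > 0 else detail[0]
        dr = detail[i] if i < len(detail) else detail[-1]
        if max(abs(dl), abs(dr)) < thresh:                # same decision
            even.append(a - (dl + dr + 2) // 4)
        else:
            even.append(a)
    odd = [d + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i, d in enumerate(detail)]
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x

x = [10, 10, 11, 10, 200, 201, 200, 199]    # a sharp edge in the middle
a, d = forward(x)
assert inverse(a, d) == x                    # perfect reconstruction
print(a, d)
```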

  6. Design strategies for irregularly adapting parallel applications

    International Nuclear Information System (INIS)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal

    2000-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high level strategies. Adaptive applications have computational work loads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability

  7. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  8. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, the pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU are processed by the same SAO operation; however, the transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that the two optimizations achieve -0.13 and -0.2 BD-rate gains, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a -0.23 BD-rate gain compared with the SAO in HEVC, an improvement of 47%, with nearly no increase in coding time.
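    A one-dimensional sketch of SAO edge-offset filtering (the category rule follows the usual edge-index pattern; the offset values are invented rather than RDO-derived):

```python
# SAO-style edge-offset filtering along one row: each pixel is compared with
# its two neighbours; local valleys, half-edges and peaks get their own
# category, and each category receives a signalled offset.

def edge_category(left, cur, right):
    sign = lambda a: (a > 0) - (a < 0)
    s = sign(cur - left) + sign(cur - right)
    return {-2: 1, -1: 2, 1: 3, 2: 4}.get(s, 0)   # 0 = monotone, no offset

def apply_sao(row, offsets):
    out = row[:]                       # classify on the unmodified samples
    for i in range(1, len(row) - 1):
        cat = edge_category(row[i - 1], row[i], row[i + 1])
        if cat:
            out[i] = row[i] + offsets[cat]
    return out

offsets = {1: +2, 2: +1, 3: -1, 4: -2}    # pull valleys up, peaks down
print(apply_sao([10, 7, 10, 10, 13, 10], offsets))   # → [10, 9, 9, 11, 11, 10]
```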

  9. Development and validation of computer codes for analysis of PHWR containment behaviour

    International Nuclear Information System (INIS)

    Markandeya, S.G.; Haware, S.K.; Ghosh, A.K.; Venkat Raj, V.

    1997-01-01

    In order to ensure that the design intent of the containment of Indian Pressurised Heavy Water Reactors (IPHWRs) is met, both analytical and experimental studies are being pursued at BARC. As a part of analytical studies, computer codes for predicting the behaviour of containment under various accident scenarios are developed/adapted. These include codes for predicting 1) pressure, temperature transients in the containment following either Loss of Coolant Accident (LOCA) or Main Steam Line Break (MSLB), 2) hydrogen behaviour in respect of its distribution, combustion and the performance of proposed mitigation systems, and 3) behaviour of fission product aerosols in the piping circuits of the primary heat transport system and in the containment. All these codes have undergone thorough validation using data obtained from in-house test facilities or from international sources. Participation in the International Standard Problem (ISP) exercises has also helped in validation of the codes. The present paper briefly describes some of these codes and the various exercises performed for their validation. (author)

  10. An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels

    Directory of Open Access Journals (Sweden)

    Jian Li

    2005-04-01

    Full Text Available We consider the problem of training-based code-timing estimation for the asynchronous direct-sequence code-division multiple-access (DS-CDMA) system. We propose a modified large-sample maximum-likelihood (MLSML) estimator that can be used, in closed form, for code-timing estimation for DS-CDMA systems over resolvable multipath channels. Simulation results show that MLSML can provide a high correct-acquisition probability and a high estimation accuracy. Simulation results also show that MLSML can have very good near-far resistance, owing to its employing a data model similar to that used for adaptive array processing, where strong interferences can be suppressed.
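    A much simpler relative of the MLSML estimator, a sliding correlator, conveys what code-timing estimation does (the spreading code, delay and noise level below are invented, and this toy ignores multipath and near-far interference entirely):

```python
import random

# Toy code-timing acquisition: correlate the received chip stream with the
# known spreading code at every candidate lag and pick the strongest one.

code = [1, -1, 1, 1, -1, -1, 1, -1]     # known training/spreading sequence
true_delay = 5
random.seed(2)
rx = ([0.0] * true_delay
      + [c + random.gauss(0, 0.1) for c in code]   # delayed, noisy copy
      + [0.0] * 4)

def estimate_delay(rx, code):
    best, best_corr = 0, float("-inf")
    for lag in range(len(rx) - len(code) + 1):
        corr = abs(sum(rx[lag + i] * code[i] for i in range(len(code))))
        if corr > best_corr:
            best, best_corr = lag, corr
    return best

print(estimate_delay(rx, code))
```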

  11. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed conditions of pressure drop or flowrate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  12. Radiation transport code with adaptive Mesh Refinement: acceleration techniques and applications

    International Nuclear Information System (INIS)

    Velarde, Pedro; Garcia-Fernandez, Carlos; Portillo, David; Barbas, Alfonso

    2011-01-01

    We present a study of acceleration techniques for solving Sn radiation transport equations with Adaptive Mesh Refinement (AMR). Both DSA and TSA are considered, taking into account the influence of the interaction between different levels of the mesh structure and the order of approximation in angle. A hybrid method is proposed in order to obtain a better convergence rate and lower computing times. Some examples are presented, relevant to ICF and X-ray secondary sources. (author)

  13. Failure to adapt infrastructure: is legal liability lurking for infrastructure stakeholders

    International Nuclear Information System (INIS)

    Gherbaz, S.

    2009-01-01

    'Full text:' Very little attention has been paid to potential legal liability for failing to adapt infrastructure to climate change-related risk. Amendments to laws, building codes and standards to take into account the potential impact of climate change on infrastructure assets are still at least some time away. Notwithstanding that amendments are still some time away, there is a real risk to infrastructure stakeholders for failing to adapt. The legal framework in Canada currently permits a court, in the right circumstances, to find certain infrastructure stakeholders legally liable for personal injury and property damage suffered by third parties as a result of climate change effects. This presentation will focus on the legal liability of owners (governmental and private sector), engineers, architects and contractors for failing to adapt infrastructure assets to climate change risk. It will answer commonly asked questions such as: Can I avoid liability by complying with existing laws, codes and standards? Do engineers and architects have a duty to warn owners that existing laws, codes and standards do not, in certain circumstances, adequately take into account the impact of climate change-related risks on an infrastructure asset? And do professional liability insurance policies commonly maintained by architects, engineers and other design professionals provide coverage for a design professional's failure to take into account climate change-related risks? (author)

  14. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    Science.gov (United States)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, only a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated, in order to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  15. Coding efficiency of AVS 2.0 for CBAC and CABAC engines

    Science.gov (United States)

    Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik

    2015-12-01

    In this paper we compare the coding efficiency of the Context-based Binary Arithmetic Coding (CBAC) [2] engine in AVS 2.0 [1] and the Context-Adaptive Binary Arithmetic Coder (CABAC) [3] engine in HEVC [4]. For a fair comparison, the CABAC is embedded in the AVS 2.0 reference code RD10.1, just as the CBAC was embedded in the HEVC in our previous work [5]. In the RD code, the rate estimation table is employed only for RDOQ. To reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we also reduced the complexity of the rate estimation table by cutting the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC. It appears that the CBAC is slightly more efficient than the CABAC in AVS 2.0.

  16. TOUTATIS: A radio frequency quadrupole code

    Directory of Open Access Journals (Sweden)

    Romuald Duperrier

    2000-12-01

    Full Text Available A cw high-power linear accelerator can only work with very low particle losses and structure activation. At low energy, the radio frequency quadrupole (RFQ) is an accelerator element that is very sensitive to losses. To design this structure, a good understanding of the beam dynamics is required. Generally, the reference code PARMTEQM is enough to design the accelerator. TOUTATIS has been written with the goals of cross-checking results and obtaining more reliable dynamics. This paper describes the different numerical methods used in the code. It is time-based, using multigrid methods and an adaptive mesh for a fine description of the forces without being time consuming. The field is calculated with a Poisson solver, and the vanes are fully described, allowing the code to properly simulate the coupling gaps and the RFQ extremities. Theoretical and experimental tests are also described and show good agreement between simulations and reference cases.

  17. BALDUR: a one-dimensional plasma transport code

    International Nuclear Information System (INIS)

    Singer, C.E.; Post, D.E.; Mikkelsen, D.R.

    1986-07-01

    The purpose of BALDUR is to calculate the evolution of plasma parameters in an MHD equilibrium which can be approximated by concentric circular flux surfaces. Transport of up to six species of ionized particles, of electron and ion energy, and of poloidal magnetic flux is computed. A wide variety of source terms is calculated, including those due to neutral gas, fusion, and auxiliary heating. The code is primarily designed for modeling tokamak plasmas but could be adapted to other toroidal confinement systems.
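The basic operation behind evolving such radial profiles is a diffusion step on the flux-surface coordinate. The following is a schematic sketch, not BALDUR's actual numerics: an explicit finite-difference diffusion step for one profile, with an invented diffusivity and invented boundary conditions (zero gradient at the axis, fixed edge value).

```python
import numpy as np

# One explicit diffusion step for a 1D radial profile u(r):
# du/dt = D * d^2u/dr^2, discretized on a uniform grid.
def diffuse(u, D, dr, dt):
    un = u.copy()
    un[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dr**2
    un[0] = un[1]        # zero-gradient condition at the magnetic axis
    un[-1] = 0.0         # fixed value at the plasma edge
    return un

r = np.linspace(0.0, 1.0, 51)
T = np.exp(-(r / 0.3)**2)        # peaked initial "temperature" profile
for _ in range(200):
    T = diffuse(T, D=0.1, dr=r[1] - r[0], dt=1e-3)
```

A real transport code couples several such equations (particles, electron and ion energy, poloidal flux) with source terms on the right-hand side, and typically uses implicit time stepping for stability.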

  18. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    Full Text Available In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems by using a neural network. In the proposed algorithm, only intra-frame information is used for reconstruction of an image with separated damaged blocks. The information of the pixels surrounding a damaged block is used to recover the errors using the neural network models. Computer simulation results show that the visual quality and the MSE evaluation of a reconstructed image are significantly improved by the proposed EC algorithm. We also propose a simple non-neural approach for comparison.
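The surrounding-pixel idea can be illustrated with a simple non-neural baseline of the kind the paper compares against. This is an assumed bilinear-interpolation scheme, not the paper's method: each damaged pixel is estimated from the intact row and column of pixels bordering the block (a neural network would learn this boundary-to-block mapping instead).

```python
import numpy as np

# Conceal a size x size damaged block with top-left corner (r0, c0) by
# bilinear interpolation from the intact pixels bordering the block.
def conceal_block(img, r0, c0, size):
    out = img.copy()
    top = img[r0 - 1, c0:c0 + size].astype(float)
    bot = img[r0 + size, c0:c0 + size].astype(float)
    left = img[r0:r0 + size, c0 - 1].astype(float)
    right = img[r0:r0 + size, c0 + size].astype(float)
    for i in range(size):
        wv = (i + 1) / (size + 1)          # vertical interpolation weight
        for j in range(size):
            wh = (j + 1) / (size + 1)      # horizontal interpolation weight
            vert = (1 - wv) * top[j] + wv * bot[j]
            horiz = (1 - wh) * left[i] + wh * right[i]
            out[r0 + i, c0 + j] = (vert + horiz) / 2
    return out

img = np.tile(np.arange(16, dtype=float), (16, 1))  # horizontal ramp image
img[4:8, 4:8] = 0.0                                 # "damaged" 4x4 block
rec = conceal_block(img, 4, 4, 4)                   # recovered block
```

On this smooth ramp the interpolation recovers the block exactly; on textured content it blurs, which is the gap a learned model aims to close.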

  19. The Premar Code for the Monte Carlo Simulation of Radiation Transport In the Atmosphere

    International Nuclear Information System (INIS)

    Cupini, E.; Borgia, M.G.; Premuda, M.

    1997-03-01

    The Monte Carlo code PREMAR is described, which allows the user to simulate radiation transport in the atmosphere in the ultraviolet-infrared frequency interval. A plane multilayer geometry is at present foreseen by the code, with an albedo option at the lower boundary surface. For a given monochromatic point source, the main quantities computed by the code are the spatial distributions of absorption by aerosols and molecules, together with the related atmospheric transmittances. Moreover, simulations of Lidar experiments are foreseen by the code, the source and telescope fields of view being assigned. To build up the appropriate probability distributions, an input data library is read by the code. For this purpose the radiance-transmittance code LOWTRAN-7 has been conveniently adapted as a source of the library, so as to exploit its wealth of information for a large variety of atmospheric simulations. Results of applications of the PREMAR code are finally presented, with special reference to simulations of Lidar system and radiometer experiments carried out at the Brasimone ENEA Centre by the Environment Department.
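The basic sampling step in such a plane-multilayer transport code can be sketched as follows. This is a toy model with invented layer data, not PREMAR itself: each photon draws an optical depth from the exponential distribution and is absorbed in the layer where that budget runs out, otherwise it escapes through the bottom boundary.

```python
import math
import random

# Invented plane-parallel atmosphere: (thickness, extinction coefficient)
# per layer; total optical depth is sum(sigma * thickness) = 0.8.
layers = [(1.0, 0.2), (1.0, 0.5), (1.0, 0.1)]

def transmitted(rng):
    tau = -math.log(1.0 - rng.random())   # sampled optical depth
    for thickness, sigma in layers:
        if tau < sigma * thickness:
            return False                  # absorbed in this layer
        tau -= sigma * thickness          # spend this layer's optical depth
    return True                           # escaped the lowest boundary

rng = random.Random(42)
n = 100_000
t = sum(transmitted(rng) for _ in range(n)) / n
# For pure absorption the estimate converges to exp(-0.8) (Beer-Lambert).
```

A full code adds scattering events, angular redistribution, and detector geometry on top of this free-path sampling, with the layer optical properties taken from a library such as the LOWTRAN-7-derived one mentioned above.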

  20. A vectorized Monte Carlo code for modeling photon transport in SPECT

    International Nuclear Information System (INIS)

    Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.

    1993-01-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor, the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled, the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo codes for modeling photon transport in SPECT.
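The history-based vs. event-based contrast can be sketched in a few lines. This is a hedged illustration with invented parameters (attenuation coefficient, slab thickness), not the FORTRAN77 code described above: instead of following one photon at a time in scalar variables, all histories sit in arrays and each "event" (here, sampling a free path and testing escape through a uniform slab) is one whole-batch operation, which is what a vector unit can pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000          # number of photon histories, advanced together
mu = 0.15            # illustrative attenuation coefficient (1/cm)
thickness = 10.0     # illustrative slab thickness (cm)

# Event 1: sample exponential free paths for all photons at once.
paths = rng.exponential(1.0 / mu, size=n)

# Event 2: test escape for the whole batch in one vectorized comparison.
escaped = paths > thickness
fraction = escaped.mean()
# For unscattered transmission this estimates exp(-mu * thickness).
```

The scalar, history-based equivalent would loop over `n` photons and branch per history; the array form replaces that with two batch operations, mirroring the DO-loops-over-histories structure the abstract describes.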