WorldWideScience

Sample records for ASTEC code adaptability

  1. Analysis of ASTEC code adaptability to severe accident simulation for CANDU type reactors

    Constantin, Marin; Rizoiu, Andrei

    2008-01-01

In order to prepare the adaptation of the ASTEC code to CANDU NPP severe accident analysis, two kinds of activities were performed: - analysis of the ASTEC modules from the point of view of models and options, followed by CANDU exploratory calculations for the appropriate modules/models; - preparation of the specifications for ASTEC adaptation to CANDU NPPs. The paper is structured in three parts: - a comparison of PWR and CANDU concepts (from the point of view of severe accident phenomena); - exploratory calculations with some ASTEC modules - SOPHAEROS, CPA, IODE, CESAR, DIVA - for problems specific to CANDU-type reactors; - an analysis of development needs - algorithms, methods, modules. (authors)

  2. Synthesis of the ASTEC integral code activities in SARNET – Focus on ASTEC V2 plant applications

    Chatelard, P.; Reinke, N.; Ezzidi, A.; Lombard, V.; Barnak, M.; Lajtha, G.; Slaby, J.; Constantin, M.; Majumdar, P.

    2014-01-01

Highlights: • Independent assessment of the ASTEC severe accident code vs. experiments is summarised. • Main remaining modelling issues and development perspectives are identified. • Independent assessment of the ASTEC code at full-scale conditions is described. • Main requirements to address BWR and PHWR types of reactors are identified. - Abstract: Among the 43 organisations which joined the SARNET2 FP7 project from 2009 to 2013, 31 have been involved in the activities on the ASTEC code. This paper presents a synthesis of the main achievements obtained with the ASTEC V2 integral code, jointly developed by IRSN (France) and GRS (Germany), covering development, validation against experimental data and applications at full-scale conditions for both Gen.II and Gen.III plants. As to code development, while the current V2.0 series of ASTEC versions was continuously improved (elaboration and release by IRSN and GRS of three successive V2.0 revisions), IRSN and GRS also intensively continued in parallel the elaboration of the second major ASTEC V2 version (V2.1), to be delivered at the end of 2014. Regarding code validation against experiments, the partners have assessed the V2.0 version and its subsequent revisions against more than 50 experiments; this extended assessment notably confirmed that most models are today close to the state of the art, while also corroborating the already-known key topics on which modelling efforts should focus as a priority. As to plant applications, the comparison of ASTEC results with other codes supports a conclusion of globally good agreement for in-vessel and ex-vessel severe accident progression. As to ASTEC adaptations to BWRs and PHWRs, significant achievements have been obtained through the elaboration and integration, in the future V2.1 version, of dedicated core degradation models, notably to account for multi coolant flows

  3. Status of the ASTEC integral code

    Van Dorsselaere, J.P.; Jacq, F.; Allelein, H.J.

    2000-01-01

The ASTEC (Accident Source Term Evaluation Code) integrated code has been developed since 1997 in close collaboration by IPSN and GRS to predict an entire LWR severe accident sequence from the initiating event up to Fission Product (FP) release out of the containment. The applications of such a code are source term determination studies, scenario evaluations, accident management studies and Probabilistic Safety Assessment level 2 (PSA-2) studies. The version V0 of ASTEC is based on the RCS modules of the ESCADRE integrated code (IPSN) and on the upgraded RALOC and FIPLOC codes (GRS) for containment thermal hydraulics and aerosol behaviour. The latest version V0.2 includes the general feedback from the overall validation performed in 1998 (25 separate-effect experiments, the PHEBUS.FP FPT1 integrated experiment), some modelling improvements (e.g. silver-iodine reactions in the containment sump), and the implementation of the main safety systems for Severe Accident Management. Several reactor applications are under way on French and German PWRs, and on VVER-1000, all with a multi-compartment configuration of the containment. The total IPSN-GRS manpower involved in the ASTEC project is today about 20 person-years per year. The main evolution of the next version V1, foreseen for the end of 2001, concerns the integration of the front-end phase and the improvement of the late-phase in-vessel degradation modelling. (author)

  4. Containment Modelling with the ASTEC Code

    Sadek, Sinisa; Grgic, Davor

    2014-01-01

ASTEC is an integral computer code jointly developed by the Institut de Radioprotection et de Sûreté Nucléaire (IRSN, France) and the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS, Germany) to assess nuclear power plant behaviour during a severe accident (SA). It consists of 13 coupled modules which compute various SA phenomena in the primary and secondary circuits of nuclear power plants (NPPs) and in the containment. The ASTEC code was used to model and simulate NPP behaviour during a postulated station blackout accident in NPP Krsko, a two-loop pressurized water reactor (PWR) plant. The primary system of the plant was modelled with 110 thermal-hydraulic (TH) volumes, 113 junctions and 128 heat structures. The secondary system was modelled with 76 TH volumes, 77 junctions and 87 heat structures. The containment was modelled with 10 TH volumes, representing the containment as a set of distinctive compartments connected by 23 junctions. A total of 79 heat structures were used to simulate the outer containment walls and the internal steel and concrete structures. Prior to the transient calculation, a steady-state analysis was performed. In order to achieve correct plant initial conditions, the operation of the regulation systems was modelled. The parameters subjected to regulation were the pressurizer pressure, the pressurizer narrow-range level and the steam mass flow rates in the steam lines. The accident analysis was focused on containment behaviour; however, the complete integral NPP analysis was carried out in order to provide correct boundary conditions for the containment calculation. During the accident, the containment integrity was challenged by the release of reactor coolant through degraded coolant pump seals and, later in the accident following release of the corium out of the reactor pressure vessel, by molten corium-concrete interaction and direct containment heating mechanisms. Impact of those processes on relevant

  5. European Validation of the Integral Code ASTEC (EVITA)

    Allelein, H.-J.; Neu, K.; Dorsselaere, J.P. Van

    2005-01-01

The main objective of the European Validation of the Integral Code ASTEC (EVITA) project is to distribute the severe accident integral code ASTEC to European partners in order to apply the validation strategy issued from the VASA project (4th EC FWP). Partners evaluate the code capability through validation on reference experiments and plant applications accounting for severe accident management measures, and compare results with reference codes. The basis version V0 of ASTEC (Accident Source Term Evaluation Code), commonly developed and basically validated by GRS and IRSN, was made available in late 2000 to the EVITA partners on their individual platforms. Users' training was performed by IRSN and GRS. The code portability on different computers was checked to be correct. A continuously available 'hot line' assistance service was set up for EVITA code users. The current version V1 was released to the EVITA partners at the end of June 2002. It allows simulation of the front-end phase through two new modules: - for the reactor coolant system, simplified two-phase thermal hydraulics (5-equation approach) during both the front-end and core degradation phases; - for core degradation, based on the structure and main models of the ICARE2 (IRSN) reference mechanistic code for core degradation and on other simplified models. The next priorities are clearly identified: code consolidation in order to increase robustness, extension of all plant applications beyond vessel lower head failure and coupling with fission product modules, and continuous improvement of users' tools. As EVITA has very successfully taken the first step towards providing end-users (such as utilities, vendors and licensing authorities) with a well-validated European integral code for the simulation of severe accidents in NPPs, the EVITA partners strongly recommend continuing the validation, benchmarking and application of ASTEC. This work will continue in the Severe Accident Research Network (SARNET) in the 6th Framework Programme.

  6. ICARE/CATHARE and ASTEC code development trends

    Chatelard, P.; Dorsselaere, J.-P. van

    2000-01-01

Regarding computer code development for the simulation of LWR severe accidents, IPSN has developed a two-tier approach based on detailed codes such as ICARE/CATHARE and on simplified models to be assembled in the ASTEC integral code. The ICARE/CATHARE code results from the coupling of the ICARE2 code, modelling the core degradation phenomena, with the thermal-hydraulics code CATHARE2. It allows calculation of PWR and VVER severe accident sequences in the whole RCS. The modelling of the early degradation phase can be considered rather complete in the ICARE/CATHARE V1 mod1 version (to be released by mid-2000), whereas some models are still missing for the late phase. The main future developments (ICARE/CATHARE V2) will concern multi-dimensional thermal hydraulics, the quenching of partially damaged cores (mechanical and chemical effects), debris bed two-phase thermal hydraulics (including reflooding) and corium behaviour in the lower head. The main other physical improvements should concern the behaviour of boron carbide control rods, the processes governing the core loss of geometry (transition phase) and the oxidation of relocated melts. The ASTEC (Accident Source Term Evaluation Code) integral code, commonly developed by IPSN and GRS, aims to predict an entire LWR (PWR, VVER and BWR) severe accident sequence from the initiating event through to FP release out of the containment, for source term, PSA level 2 or accident management studies. The version ASTEC V0.3, to be released by mid-2000, can now be considered robust and fast-running enough (between 2 and 12 hours for a one-day accident) and allows any accident scenario study to be performed, with a containment multi-compartment configuration, accounting for the main safety systems and operator procedures (spray, recombiner, etc.). The next version, ASTEC V1, to be released at the beginning of 2002, will include the front-end simulation and improve the modelling of in-vessel core degradation.
A large validation activity will

  7. On boundary layer modelling using the ASTEC code

    Smith, B.L.

    1991-07-01

The modelling of fluid boundary layers adjacent to non-slip, heated surfaces using the ASTEC code is described. The principal boundary layer characteristics are derived using simple dimensional arguments, and these are developed into criteria for optimum placement of the computational mesh to achieve realistic simulation. In particular, the need for externally imposed drag and heat transfer correlations as a function of the local mesh concentration is discussed in the context of both laminar and turbulent flow conditions. Special emphasis is placed, in the latter case, on the (k-ε) turbulence model, which is standard in the code. As far as possible, the analyses are pursued from first principles, so that no comprehensive knowledge of the history of the subject is required for the general ASTEC user to derive practical advice from the document. Some attention is paid to the use of heat transfer correlations for internal solid/fluid surfaces, whose treatment is not straightforward in ASTEC. It is shown that three formulations are possible to effect the heat transfer, called Explicit, Jacobian and Implicit. The particular advantages and disadvantages of each are discussed with regard to numerical stability and computational efficiency. (author) 18 figs., 1 tab., 39 refs
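The mesh-placement idea described in the abstract above can be illustrated with a simple wall-distance estimate: choose the height of the wall-adjacent cell so that its centre lands at a target dimensionless wall distance y+, where wall functions (as used with the k-ε model) are valid. The sketch below uses a textbook flat-plate skin-friction correlation for illustration; it is not taken from the ASTEC documentation, and the numerical values are only order-of-magnitude examples.

```python
import math

def first_cell_height(u_inf, L, nu, y_plus_target=30.0):
    """Estimate the wall-normal height (m) that places the first cell
    centre at a target y+, using the flat-plate skin-friction
    correlation Cf = 0.026 / Re_x**(1/7).  Illustrative only; ASTEC's
    own placement criteria may differ."""
    re_x = u_inf * L / nu                    # plate Reynolds number
    cf = 0.026 / re_x ** (1.0 / 7.0)         # skin-friction coefficient
    tau_w_over_rho = 0.5 * cf * u_inf ** 2   # wall shear stress / density
    u_tau = math.sqrt(tau_w_over_rho)        # friction velocity
    return y_plus_target * nu / u_tau        # y = y+ * nu / u_tau

# Example: air (nu ~ 1.5e-5 m^2/s) at 10 m/s over a 1 m plate,
# targeting y+ ~ 30, the lower edge of the log-law region where
# wall functions for the k-eps model are typically applied.
h = first_cell_height(u_inf=10.0, L=1.0, nu=1.5e-5, y_plus_target=30.0)
```

For these conditions the estimate comes out on the order of a millimetre, which shows why near-wall mesh concentration (or, alternatively, imposed drag and heat transfer correlations) is needed when the mesh is coarser than this.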

  8. Contributions to the validation of the ASTEC V1 code

    Constantin, Marin; Rizoiu, Andrei; Turcu, Ilie

    2004-01-01

In the frame of the PHEBEN2 project (Validation of severe accident codes for applications to nuclear power plants, based on the PHEBUS FP experiments), a project developed within the EU research Framework Programme 5 (FP5), the INR Pitesti team received the task of determining the ASTEC code sensitivity. The PHEBEN2 project was initiated in 1998 and gathered 13 partners from 6 EU member states; 4 partners from 3 candidate states (Hungary, Bulgaria and Romania) joined the project later. The works were contracted with the European Commission (under the FIKS-CT1999-00009 contract), which supports the research effort financially up to about 50%. According to the contract provisions, the INR team participated in Working Package 1 (WP1), which refers to the validation of the integral computation codes against the PHEBUS experimental data, and in Working Package 3 (WP3), which refers to the evaluation of the codes to be applied to nuclear power plants for risk evaluation, nuclear safety margin evaluation and determination/evaluation of the measures to be adopted in case of a severe accident. The present work continues the efforts towards a preliminary validation of the ASTEC code. The focus is on stand-alone sensitivity analyses applied to two of the most important modules of the code, namely DIVA and SOPHAEROS

  9. Status of emergency spray modelling in the integral code ASTEC

    Plumecocq, W.; Passalacqua, R.

    2001-01-01

Containment spray systems are emergency systems that would be used in very low probability events which may lead to severe accidents in Light Water Reactors. In most cases, the primary function of the spray would be to remove heat and condense steam in order to reduce pressure and temperature in the containment building. Spray would also wash out fission products (aerosols and gaseous species) from the containment atmosphere. The efficiency of the spray system in containment depressurization as well as in the removal of aerosols during a severe accident depends on the evolution of the spray droplet size distribution with height in the containment, due to kinetic and thermal relaxation, gravitational agglomeration and mass transfer with the gas. A model has been developed taking into account all of these phenomena. This model has been implemented in the ASTEC code, with a validation of the droplet relaxation against the CARAIDAS experiment (IPSN). Applications of this modelling to a PWR 900 during a severe accident, with special emphasis on the effect of spray on containment hydrogen distribution, have been performed in a multi-compartment configuration with the ASTEC V0.3 code. (author)
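The kinetic relaxation mentioned above can be sketched with the classical Stokes-drag result: a droplet injected at a velocity different from the gas relaxes toward it over a characteristic time, and in still gas settles at the corresponding terminal velocity. This is a textbook sketch of the phenomenon, not the ASTEC spray model itself (which additionally treats thermal relaxation, agglomeration and mass transfer).

```python
def droplet_relaxation(d, rho_d=1000.0, mu_g=1.8e-5, g=9.81):
    """Stokes-regime kinetic relaxation time of a droplet,
    tau = rho_d * d**2 / (18 * mu_g), and the corresponding terminal
    settling velocity v_t = tau * g.  Property values (water droplet
    in air) are illustrative assumptions."""
    tau = rho_d * d ** 2 / (18.0 * mu_g)   # velocity relaxation time, s
    return tau, tau * g                    # (tau, terminal velocity m/s)

# A 100-micron water droplet in air relaxes in a few hundredths of a
# second, i.e. within a short fall distance below the spray nozzle.
tau, v_t = droplet_relaxation(d=100e-6)
```

Because tau scales with d**2, the larger droplets of a polydisperse spray keep their injection velocity much longer, which is one reason the size distribution evolves with height in the containment.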

  10. Applications of ASTEC integral code on a generic CANDU 6

    Radu, Gabriela, E-mail: gabriela.radu@nuclear.ro [Institute for Nuclear Research, Campului 1, 115400 Mioveni, Arges (Romania); Prisecaru, Ilie [Power Engineering Department, University “Politehnica” of Bucharest, 313 Splaiul Independentei, Bucharest (Romania)

    2015-05-15

Highlights: • Short overview of the models included in the ASTEC MCCI module. • MEDICIS/CPA coupled calculations for a generic CANDU6 reactor. • Two cases taking into account different pool/concrete interface models. - Abstract: In case of a hypothetical severe accident in a nuclear power plant, the corium consisting of the molten reactor core and internal structures may flow onto the concrete floor of the containment building. This would cause an interaction between the molten corium and the concrete (MCCI), in which the heat transfer from the hot melt to the concrete would cause the decomposition and ablation of the concrete. The potential hazard of this interaction is the loss of integrity of the containment building and the release of fission products into the environment, due to the possibility of a concrete foundation melt-through or of containment over-pressurization by the gases produced from the decomposition of the concrete or by the inflammation of combustible gases. In the safety assessment of nuclear power plants, it is necessary to know the consequences of such a phenomenon. The paper presents an example of application of the ASTEC V2 code to a generic CANDU6 reactor. This concerns the thermal-hydraulic behaviour of the containment during molten core-concrete interaction in the reactor vault. The calculations were carried out with the help of the MEDICIS MCCI module and the CPA containment module of the ASTEC code, coupled through a specific prediction-correction method, which consists of describing the heat exchanges with the vault walls and the partially absorbent gases. Moreover, the heat conduction inside the vault walls is described. Two cases are presented in this paper, taking into account two different heat transfer models at the pool/concrete interface and siliceous concrete. The corium pool configuration corresponds to a homogeneous configuration with a detailed description of the upper crust.

  11. Radioactive releases of nuclear power plants: the code ASTEC

    Sdouz, G.; Pachole, M.

    1999-11-01

In order to adopt potential countermeasures to protect the population during the course of an accident in a nuclear power plant, a fast prediction of the radiation exposure is necessary. The basic input for such a dispersion calculation is the source term, i.e. the description of the physical and chemical behaviour of the released radioactive nuclides. Based on a source term database, a pilot system has been developed to determine a relevant source term and to generate the input file for the dispersion code TAMOS of the Zentralanstalt fuer Meteorologie und Geodynamik (ZAMG). This file can be sent directly as an e-mail attachment to the TAMOS user for further processing. The source terms for 56 European nuclear power plant units are included in the pilot version of the code ASTEC (Austrian Source Term Estimation Code). The use of the system is demonstrated in an example based on an accident in the unit TEMELIN-1. In order to calculate typical core inventories for the database, the international computer code ORIGEN 2.1 was installed and applied. The report is completed with a discussion of the optimal data transfer. (author)

  12. ASTEC V2. Overview of code development and application at GRS

    Reinke, N.; Nowack, H.; Sonnenkalb, M.

    2011-01-01

The integral code ASTEC (Accident Source Term Evaluation Code), developed jointly since 1996 by the French IRSN and the German GRS, is a fast-running programme which allows the calculation of entire sequences of severe accidents (SA) in light water reactors, from the initiating event up to the release of fission products into the environment, thereby covering all important in-vessel and containment phenomena. Thus, the main intended ASTEC application fields are accident sequence studies, uncertainty and sensitivity studies, probabilistic safety analysis level 2, as well as support to experiments. The modular structure of ASTEC allows each module to be run independently and separately, e.g. for separate-effects analyses, as well as combining multiple modules for coupled-effects testing and integral analyses. The subject of this paper is an overview of the new V2 series of the ASTEC code system and a presentation of exemplary results for its application to severe accident sequences in PWRs. (orig.)

  13. VVER 1000 SBO calculations with pressuriser relief valve stuck open with ASTEC computer code

    Atanasova, B.P.; Stefanova, A.E.; Groudev, P.P.

    2012-01-01

Highlights: ► We modelled the ASTEC input file for the accident scenario (SBO) and focused the analyses on the behaviour of core degradation. ► We assumed opening and sticking open of the pressurizer relief valve during the SBO scenario. ► ASTEC v1.3.2 has been used as a reference code for the comparison study with the new version of the ASTEC code. - Abstract: The objective of this paper is to present the results obtained from calculations with the ASTEC computer code for the source term evaluation of a specific severe accident transient. The calculations have been performed with the new version of ASTEC. The ASTEC V2 code version is released by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) and the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany. This investigation has been performed in the framework of the SARNET2 project (under the Euratom 7th Framework Programme) by the Institute for Nuclear Research and Nuclear Energy - Bulgarian Academy of Sciences (INRNE-BAS).

  14. Development and assessment of ASTEC code for severe accident simulation

    Van Dorsselaere, J.P.; Pignet, S.; Seropian, C.; Montanelli, T.; Giordano, P.; Jacq, F.; Schwinges, B.

    2005-01-01

Full text of publication follows: The ASTEC integral code, jointly developed by IRSN and GRS for several years for the evaluation of the source term during a severe accident (SA) in a Light Water Reactor, will play a central role in the SARNET network of excellence of the 6th Framework Programme (FwP) of the European Commission, which started in spring 2004. It should become the reference European SA integral code in the coming years. The version V1.1, released in June 2004, allows modelling of most of the main physical phenomena (except steam explosion) near or at the state of the art. In order to allow a great number of scenarios to be studied, a compromise must be found between precision of results and calculation time: one day of accident time usually takes less than one day of real time to be simulated on a PC. Important efforts are being made on validation, covering more than 30 reference experiments, often International Standard Problems from the OECD (CORA, LOFT, PACTEL, BETA, VANAM, ACE-RTF, Phebus.FPT1...). The code is also used for the detailed interpretation of all the integral Phebus.FP experiments. Eighteen European partners performed a first independent evaluation of the code capabilities in 2000-03 within the frame of the EVITA 5th FwP project, on the one hand by comparison with experiments and on the other hand by benchmarking against the MAAP4 and MELCOR integral codes on plant applications for PWR and VVER. Their main conclusions were the need to improve code robustness (especially in the two new modules CESAR and DIVA, simulating respectively circuit thermal hydraulics and core degradation) and the post-processing tools. Some improvements have already been achieved on these two aspects in the latest version V1.1. A new module, MEDICIS, devoted to Molten Core Concrete Interaction (MCCI), is implemented in this version, with a tight coupling to the containment thermal-hydraulics module CPA.
The paper presents a detailed analysis of a TMLB sequence on a French 900 MWe PWR, from

  15. An algorithm for solving thermalhydraulic equations in complex geometries: the Astec code

    Lonsdale, R.D.

    1987-01-01

By applying a finite volume approach to a finite element mesh, the ASTEC computer code allows three-dimensional incompressible fluid flow and heat transfer in complex geometries to be simulated realistically, without making excessive demands on computing resources. The methods used in the code are described, and examples of the application of the code are presented.

  16. Evolution of ASTEC V1.2 rev.1 code for WWER-1000 reactors/SBO sequence

    Georgieva, J.; Stefanova, A.; Groudev, P.; Tusheva, P.; Kalchev, B.; Passalacqua, R.

    2006-01-01

This paper presents a comparison between ASTEC calculations of severe accidents in a WWER-1000, for an event of full unloading with relief valves stuck open and no hydroaccumulator intervention. The purpose of the analyses is to present the relationship between the improvements of the current version (ASTEC V1.2 rev. 1) and ASTEC V1.1 p2, such as code modifications and input data improvements. The resulting discrepancies are examined, and case-by-case suggestions for ASTEC improvements are provided

  17. Analysis of SCARABEE BE+3 experiment with ASTEC-Na and comparison with other SFR safety analysis codes

    Bandini, Giacomino; Ederli, Stefano; Perez-Martin, Sara; Pfrang, Werner; Girault, Nathalie; Cloarec, Laure

    2017-01-01

The ASTEC-Na code was further developed and assessed in the frame of the JASMIN project of the 7th EU Framework Programme to extend the original capability of ASTEC, dealing with severe accident analysis in LWRs, to Sodium-cooled Fast Reactors (SFRs). The in-pile BE+3 experiment from the SCARABEE-N programme has been simulated with ASTEC-Na for thermal-hydraulic model validation purposes. The adequacy of the ASTEC-Na thermal-hydraulic models has also been investigated through comparison with other safety analysis codes. The analysis of the SCARABEE BE+3 test confirms the good performance of the ASTEC-Na code in the calculation of single-phase conditions and boiling onset, while larger deviations are encountered in the analysis of two-phase conditions, mainly regarding the propagation of the boiling front. Furthermore, reasonable agreement was found with the other codes' results. (author)

  18. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  19. Thermal-hydraulic and aerosol containment phenomena modelling in ASTEC severe accident computer code

    Kljenak, Ivo; Dapper, Maik; Dienstbier, Jiri; Herranz, Luis E.; Koch, Marco K.; Fontanet, Joan

    2010-01-01

Transients in containment systems of different scales (Phebus.FP containment, KAEVER vessel, Battelle Model Containment, LACE vessel and VVER-1000 nuclear power plant containment), involving thermal-hydraulic phenomena and aerosol behaviour, were simulated with the integral computer code ASTEC. The results of the simulations of the first four facilities were compared with experimental results, whereas the results of the simulated accident in the VVER-1000 containment were compared to results obtained with the MELCOR code. The main purpose of the simulations was the validation of the CPA module of the ASTEC code. The calculated results support the applicability of the code for predicting in-containment thermal-hydraulic and aerosol phenomena during a severe accident in a nuclear power plant.

  20. Simulation of hydrogen deflagration experiments in the ENACCEF facility using ASTEC code

    Povilaitis, Mantas; Urbonavicius, Egidijus; Rimkevicius, Sigitas

    2011-01-01

During a hypothetical severe accident in an NPP involving degradation of the core of a light water reactor, hydrogen could be generated and released into the containment atmosphere, posing a deflagration or even a detonation risk. In the case of deflagration, the integrity of the containment would be threatened by the increase of the containment atmosphere pressure and temperature. Other risks of containment damage due to turbulent flames exist, caused by high pressure pulses, shock waves, etc. For the simulation of such processes, reliable numerical codes are needed. Although flame acceleration has been studied extensively for homogeneous hydrogen-air mixtures, there are still unresolved issues in this research area, e.g. the effect of the turbulence level on flame acceleration and quenching. This paper presents simulations of hydrogen deflagration experiments in the ENACCEF facility using the ASTEC code, performed in the frames of International Standard Problem No. 49 and the SARNET2 project. The experiments and simulations were performed with the aim of evaluating the capability of the codes (a number of participants with various codes took part in the project) to simulate hydrogen combustion. ASTEC is an integral nuclear safety analysis code based on a lumped-parameter approach. For the presented simulations, the ASTEC modules CPA (containment thermohydromechanics) and FRONT (hydrogen deflagration) were used. The paper presents the ENACCEF test facility, the nodalisation schemes developed for the calculations, the simulated experiments and the simulation results. A brief description of the FRONT module is also given. The calculated results are compared with the experimental results and analysed. (author)
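A standard first estimate of the pressure threat from a deflagration described above is the adiabatic isochoric complete combustion (AICC) bound: all of the hydrogen burns at constant volume and the pressure scales with the temperature rise. The sketch below neglects the mole-number change and high-temperature dissociation, and the heat-capacity value is an assumed constant, so it is an illustrative bound rather than the ASTEC FRONT flame-propagation model.

```python
def aicc_pressure(p0=1.0e5, T0=298.0, x_h2=0.10, q_mol=240.0e3, cv=22.0):
    """Rough AICC bound on the post-burn pressure (Pa) of a lean
    H2-air mixture in a closed volume.  Energy per mole of mixture,
    x_h2 * q_mol (J/mol, ~240 kJ per mol H2 burnt), heats the gas at
    constant volume with molar heat capacity cv (J/mol/K, assumed
    constant); pressure then scales with absolute temperature."""
    dT = x_h2 * q_mol / cv          # adiabatic temperature rise, K
    return p0 * (T0 + dT) / T0      # ideal-gas isochoric pressure, Pa

# 10 vol% H2 in air at 1 bar: the bound comes out at several bar,
# which is why containment loads from deflagration are a key concern.
p_aicc = aicc_pressure()
```

Lumped-parameter codes refine this bound by tracking incomplete combustion, heat losses to structures and the actual flame propagation between compartments.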

  1. ASTEC code development, validation and applications for severe accident management within the CESAM European project - 15392

    Van Dorsselaere, J.P.; Chatelard, P.; Chevalier-Jabet, K.; Nowack, H.; Herranz, L.E.; Pascal, G.; Sanchez-Espinoza, V.H.

    2015-01-01

ASTEC, jointly developed by IRSN and GRS, is considered the European reference code since it capitalises knowledge from European research in the domain. The CESAM project aims at its enhancement and extension for use in severe accident management (SAM) analysis of the Generation II-III nuclear power plants (NPPs) presently under operation or foreseen in the near future in Europe, spent fuel pools included. Within the CESAM project, 3 main types of research activities are performed: -) further validation of ASTEC models important for SAM, in particular for the phenomena of importance in the Fukushima-Daiichi accidents, such as reflooding of degraded cores, pool scrubbing, hydrogen combustion, or spent fuel pool behaviour; -) modelling improvements, especially for BWRs or based on the feedback of the validation tasks; and -) ASTEC applications to severe accident scenarios in European NPPs in order to assess prevention and mitigation measures. An important step will be reached with the next major ASTEC V2.1 version, planned to be delivered in the first part of 2015. Its main improvements will concern the possibility to simulate in detail the core degradation of BWRs and PHWRs, and a model of reflooding of severely degraded cores. A new user-friendly Graphical User Interface will be available for plant analyses

  2. Fission product release from nuclear fuel I. Physical modelling in the ASTEC code

    Brillant, G.; Marchetto, C.; Plumecocq, W.

    2013-01-01

Highlights: • Physical modelling of FP and SM release in ASTEC is presented. • The release is described as solid-state diffusion within the fuel for highly volatile FPs. • The release is described as FP vaporisation for semi-volatile FPs. • The release is described as fuel vaporisation for low-volatile FPs. • ASTEC validation is presented in the second paper. - Abstract: This article is the first of a series of two articles dedicated to the mechanisms of fission product release from a degraded core as they are modelled in the ASTEC code. The ASTEC code aims at simulating severe accidents in nuclear reactors from the initiating event up to the radiological consequences for the environment. This code is used for several applications, such as nuclear plant safety evaluation, including probabilistic studies, and emergency preparedness. To cope with the requirements of robustness and low calculation time, the code is based on a semi-empirical approach, and only the main limiting phenomena that govern the release from intact rods and from debris beds are considered. For solid fuel, fission products are classified into three groups, depending on their degree of volatility. The release kinetics of volatile fission products depend on the rate-limiting process of solid-state diffusion through the fuel grains. For semi-volatile fission products, the release from the open fuel porosities is assumed to be governed by vaporisation and mass transfer processes. The key phenomenon for the release of low-volatile fission products is supposed to be fuel volatilisation. A similar approach is used for the release of fission products from a rubble bed. An in-depth validation of the code, including both analytical and integral experiments, is the subject of the second article
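The rate-limiting solid-state diffusion described above is classically captured by the Booth equivalent-sphere approximation: the fraction of a volatile fission product released from a spherical fuel grain grows with the square root of time at short times. This is the standard textbook form in the spirit of the semi-empirical approach described above, not the exact correlation set of the ASTEC release module.

```python
import math

def booth_release(D_over_a2, t):
    """Fractional release f(t) of a volatile fission product from a
    spherical fuel grain by solid-state diffusion (classical Booth
    short-time approximation): f = 6*sqrt(D't/pi) - 3*D't, valid for
    f well below 1, where D' = D/a^2 (1/s) is the diffusion
    coefficient reduced by the squared grain radius."""
    x = D_over_a2 * t
    f = 6.0 * math.sqrt(x / math.pi) - 3.0 * x
    return min(f, 1.0)

# Assumed illustrative value: D' = 1e-8 1/s at high temperature;
# after one hour only a few percent of the inventory has escaped
# the grains, showing why volatile release is kinetics-limited.
f = booth_release(1e-8, 3600.0)
```

In practice D' is a strong (Arrhenius) function of fuel temperature, which is what couples the release kinetics to the core degradation calculation.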

  3. Modelling and description of PHEBUS FPT1 experiment with the computer code ASTEC

    Tusheva, P.; Kalchev, B.

    2005-01-01

    The PHEBUS Fission Product (FP) programme was initiated in 1988 after the major severe reactor accidents at Three Mile Island and Chernobyl. Its main objective is to study the release, transport and retention of fission products in an in-pile facility under severe accident conditions in a LWR. This paper covers the description and modelling of the FPT1 experiment with the ASTEC code. The main calculated events, the temperature evolution at the middle part of the test bundle, the state of bundle degradation at the end of the calculation and the calculated hydrogen production are presented and discussed. The ASTEC calculation shows good agreement with the experiment; the calculated hydrogen production is slightly overestimated in comparison with the experimental results.

  4. Severe damage analysis of VVER 1000 following large break LOCA using Astec code

    Chatterjee, B.; Mukhopadhyay, D.; Lele, H.G.; Ghosh, A.K.; Kushwaha, H.S.

    2007-01-01

    Severe accident analysis of a reactor is an important aspect of source term evaluation, which in turn supports emergency planning. An analysis has been carried out for the VVER-1000 (V320) reactor following a Large Break LOCA (loss of coolant accident) together with a Station Blackout (SBO). The computer code ASTEC (jointly developed by IRSN, France, and GRS, Germany) is used for analyzing the transient. This integral code has been designed to be used as a reference code for level 2 PSA studies. Two cases have been analysed with the version ASTEC V1.2-rev1: in the first case the hydro-accumulators are considered unavailable, while the second case has been analysed with operating hydro-accumulators. ASTEC predictions have been studied for the in-vessel phase of the accident up to vessel failure. Vessel failure was observed at 6979 s when the accumulators were assumed unavailable, and was considerably delayed (19294 s) with operating accumulators. The hydrogen production was found to be much larger in the case with accumulators (22% of the total Zr inventory) than in the case without accumulators (1.5% of the total Zr inventory).
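    Hydrogen fractions quoted as a percentage of the Zr inventory translate into mass via the zirconium-steam reaction Zr + 2H2O → ZrO2 + 2H2, i.e. about 0.044 kg of H2 per kg of Zr oxidized. A sketch with a hypothetical 20 t cladding inventory (the abstract does not give the actual inventory):

```python
# Stoichiometry of the zirconium-steam reaction: Zr + 2 H2O -> ZrO2 + 2 H2
M_ZR = 91.224e-3   # molar mass of Zr, kg/mol
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

def h2_mass_from_zr(m_zr_total, fraction_oxidized):
    """Hydrogen mass [kg] produced when a given fraction of the total Zr
    inventory m_zr_total [kg] is oxidized by steam (2 mol H2 per mol Zr)."""
    moles_zr = fraction_oxidized * m_zr_total / M_ZR
    return 2.0 * moles_zr * M_H2

# Hypothetical 20 t Zr inventory; 22% vs 1.5% oxidized as in the abstract
h2_with_acc = h2_mass_from_zr(20.0e3, 0.22)
h2_without_acc = h2_mass_from_zr(20.0e3, 0.015)
```

With these assumed inventories the two cases differ by roughly 180 kg of hydrogen, which illustrates why accumulator injection (more steam available for oxidation) can dominate the hydrogen source term.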

  5. Severe accident analysis in a two-loop PWR nuclear power plant with the ASTEC code

    Sadek, Sinisa; Amizic, Milan; Grgic, Davor

    2013-01-01

    The ASTEC/V2.0 computer code was used to simulate a hypothetical severe accident sequence in the nuclear power plant Krsko, a 2-loop pressurized water reactor (PWR) plant. ASTEC is an integral code jointly developed by Institut de Radioprotection et de Surete Nucleaire (IRSN, France) and Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Germany) to assess nuclear power plant behaviour during a severe accident. The analysis was conducted in 2 steps. First, the steady state calculation was performed in order to confirm the applicability of the plant model and to obtain correct initial conditions for the accident analysis. The second step was the calculation of the station blackout accident with a leakage of the primary coolant through degraded reactor coolant pump seals, which was a small LOCA without makeup capability. Two scenarios were analyzed: one with and one without the auxiliary feedwater (AFW). The latter scenario, without the AFW, resulted in earlier core damage. In both cases, the accident ended with a core melt and a reactor pressure vessel failure with significant release of hydrogen. In addition, results of the ASTEC calculation were compared with results of the RELAP5/SCDAPSIM calculation for the same transient scenario. The results comparison showed a good agreement between predictions of those 2 codes. (orig.)

  6. Validation of Code ASTEC with LIVE-L1 Experimental Results

    Bachrata, Andrea

    2008-01-01

    Severe accidents with core melting are considered at the design stage of Generation 3+ Nuclear Power Plants (NPPs), and there is an effort to apply severe accident management to NPPs already in operation. One of the main goals of severe accident mitigation is corium localization and stabilization. Two strategies fulfil this requirement: in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the in-vessel retention scenario, large experimental programmes and integral codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and the melt accumulation in the lower head under different cooling conditions. A new European computer code, ASTEC, is being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel corium retention is its validation against the LIVE-L1 experimental results. Details of the experiment are reported, and results of the application of the ASTEC DIVA module to the analysis of the test are presented. (author)

  7. ASTEC V2 severe accident integral code: Fission product modelling and validation

    Cantrel, L.; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.

    2014-01-01

    One main goal of the severe accident integral code ASTEC V2, jointly developed for more than 15 years by IRSN and GRS, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications are source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code, each module implementing the models for one part of the phenomenology: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment building in the IODE module. Three other modules, CPA, ISODOP and DOSE, compute, respectively, the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate needed to model radiochemistry in the gaseous phase. In ELSA, the release models are semi-mechanistic and have been validated against a wide range of experimental data, notably the VERCORS experiments. In SOPHAEROS, the models can be divided into two parts: vapour-phase phenomena and aerosol-phase phenomena. In IODE, iodine and ruthenium chemistry are modelled with a semi-mechanistic approach; these FPs can form volatile species and are particularly important in terms of potential radiological consequences. The models in these three modules are based on a wide experimental database, resulting for a large part from international programmes, and are considered at the state of the art of R and D knowledge. This paper illustrates some FP modelling capabilities of ASTEC; computed values are compared with experimental results that are part of the validation matrix.

  8. Aerosol sampling and Transport Efficiency Calculation (ASTEC) and application to surtsey/DCH aerosol sampling system: Code version 1.0: Code description and user's manual

    Yamano, N.; Brockmann, J.E.

    1989-05-01

    This report describes the features and use of the Aerosol Sampling and Transport Efficiency Calculation (ASTEC) code. The ASTEC code has been developed to assess aerosol transport efficiency in source term experiments at Sandia National Laboratories. The code also has broad application to aerosol sampling and transport efficiency calculations in general, as well as to aerosol transport considerations in nuclear reactor safety issues. 32 refs., 31 figs., 7 tabs
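    For context, a common textbook estimate of sampling-line transport efficiency (not this ASTEC code's actual models, which the abstract does not detail) is the penetration against gravitational settling in a horizontal line, under a well-mixed cross-section assumption:

```python
import math

def settling_velocity(d_p, rho_p=1000.0, mu=1.8e-5, g=9.81, Cc=1.0):
    """Terminal gravitational settling velocity [m/s] of a particle of
    diameter d_p [m] and density rho_p [kg/m^3] in air (Stokes regime,
    slip correction Cc ~ 1 for micron-sized particles)."""
    return rho_p * d_p**2 * g * Cc / (18.0 * mu)

def gravitational_penetration(d_p, L, d_tube, Q):
    """Fraction of aerosol penetrating a horizontal sampling line of
    length L [m] and diameter d_tube [m] at volume flow Q [m^3/s].
    Well-mixed assumption: particles settle onto the projected floor
    area d_tube * L, giving P = exp(-v_ts * d_tube * L / Q)."""
    v_ts = settling_velocity(d_p)
    return math.exp(-v_ts * d_tube * L / Q)

# 5 um unit-density particle, 2 m line, 10 mm bore, 10 L/min sample flow
P = gravitational_penetration(5.0e-6, 2.0, 0.01, 10.0e-3 / 60.0)
```

The strong d_p^2 dependence of the settling velocity is why transport-efficiency corrections matter most for the large-particle end of a measured size distribution.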

  9. ASTEC V2 severe accident integral code main features, current V2.0 modelling status, perspectives

    Chatelard, P.; Reinke, N.; Arndt, S.; Belon, S.; Cantrel, L.; Carenini, L.; Chevalier-Jabet, K.; Cousin, F.; Eckel, J.; Jacq, F.; Marchetto, C.; Mun, C.; Piar, L.

    2014-01-01

    The severe accident integral code ASTEC, jointly developed for almost 20 years by IRSN and GRS, simulates the behaviour of a whole nuclear power plant under severe accident conditions, including severe accident management by engineered systems and procedures. Since 2004, ASTEC has progressively become the reference European severe accident integral code, in particular through the intensification of research activities carried out in the frame of the SARNET European network of excellence. The first version of the new ASTEC V2 series was released in 2009 to about 30 organizations worldwide, in particular to SARNET partners. With respect to the previous V1 series, the V2 series includes advanced core degradation models (derived from the ICARE2 IRSN mechanistic code) and the extensions necessary to be applicable to Gen. III reactor designs, notably a description of the core catcher component to simulate severe accident transients in the EPR reactor. Besides these two key evolutions, most of the other physical modules have also been improved, and ASTEC V2 is now coupled to the SUNSET statistical tool to facilitate uncertainty and sensitivity analyses. The ASTEC models are today at the state of the art (in particular the fission product models with respect to source term evaluation), except for quenching of a severely damaged core. Beyond the need to develop an adequate model for the reflooding of a degraded core, the main other medium-term objectives are to further extend the scope of application to BWR and CANDU reactors, to spent fuel pool accidents, and to accidents in both the ITER fusion facility and Gen. IV reactors (with priority on sodium-cooled fast reactors), while making ASTEC evolve towards a severe accident simulator constitutes the main long-term objective. This paper presents the status of the ASTEC V2 versions, focussing on the description of the V2.0 models for water-cooled nuclear plants.

  10. Modeling of severe accident sequences with the new modules CESAR and DIVA of ASTEC system code

    Pignet, Sophie; Guillard, Gaetan; Barre, Francois; Repetto, Georges

    2003-01-01

    Systems of computer codes, so-called 'integral' codes, are being developed to simulate the scenario of a hypothetical severe accident in a light water reactor, from the initiating event until the possible radiological release of fission products out of the containment. They couple the predominant physical phenomena that occur in the different reactor zones and simulate the actuation of safety systems by procedures and by operators. To allow the study of a great number of scenarios, a compromise must be found between precision of results and calculation time: one day of accident time should take less than one day of real time to simulate on a PC. This search for a compromise is a real challenge for such integral codes. The development of the ASTEC integral code was initiated jointly by IRSN and GRS as an international reference code. The latest version 1.0 of ASTEC, including the new modules CESAR and DIVA which model the behaviour of the reactor cooling system and the core degradation, is presented here. Validation of the modules and one plant application are described.

  11. Modelling of QUENCH-03 and QUENCH-06 Experiments Using RELAP/SCDAPSIM and ASTEC Codes

    Tadas Kaliatka

    2014-01-01

    To prevent total meltdown of an uncovered and overheated core, reflooding with water is a necessary accident management measure. Because this action leads to the generation of hydrogen, which can cause further problems, the related phenomena are investigated through experiments and computer simulations. In this paper, the QUENCH-03 and QUENCH-06 loss-of-coolant experiments performed at Forschungszentrum Karlsruhe are modelled using the RELAP5/SCDAPSIM and ASTEC codes. The performed benchmark allowed different modelling features to be analysed. Recommendations for model development are presented.

  12. Simulation of the fuel rod bundle test QUENCH-03 using the system codes ASTEC and ATHLET-CD

    Kruse, P.; Koch, M.K.

    2011-01-01

    The QUENCH-03 test was performed on the 21st of January 1999 at FZK (Forschungszentrum Karlsruhe) to investigate the behaviour during reflood of PWR (Pressurized Water Reactor) fuel rods with little oxidation. This paper presents the results of the simulation of QUENCH-03 performed with version V1.3 of the integral code ASTEC (Accident Source Term Evaluation Code), which is being developed by IRSN (France) in cooperation with GRS (Germany), and with program version 2.1A of the mechanistic code ATHLET-CD (Analysis of Thermal-hydraulics of Leaks and Transients - Core Degradation), which is under development by GRS. First, the QUENCH test facility and the QUENCH test programme in general are described, followed by the conduct of the QUENCH-03 test and a description of the codes ASTEC and ATHLET-CD with the associated modelling of the test section. The results of this calculation show that during the heat-up and transient phases both codes calculate bundle and shroud temperatures as well as the hydrogen production in good approximation to the experimental data. During the quench phase and up to the end of the test, only the PRATER oxidation model of ASTEC simulates the hydrogen production very well; the other oxidation models of ASTEC partly fail to reproduce the measured amount of hydrogen, and ATHLET-CD underestimates the integral amount at the end of the test. In the ASTEC calculations the temperatures during the quench phase show qualitatively good results, with only time delays at some elevations of the bundle. ATHLET-CD reproduces the thermal behaviour up to the first temperature escalation very well; after that the temperatures are partly over-estimated, and the time delay recognized in the ASTEC calculations is seen as well. The results of the integral code ASTEC emphasize that the calculation of QUENCH-03 is possible and leads to good results concerning hydrogen release and corresponding temperatures. Because the QUENCH-03 test was

  13. Evaluation of Thermal Load to the Lower Head Vessel Using the ASTEC Computer Code

    Park, Raejoon; Ahn, Kwangil

    2013-01-01

    The thermal load from the corium to the lower head vessel of the APR1400 (Advanced Power Reactor 1400) during a small break loss of coolant accident (SBLOCA) without safety injection (SI) has been evaluated using the ASTEC (Accident Source Term Evaluation Code) computer code, which has been developed as a part of the EU (European Union) SARNET (Severe Accident Research NETwork) program. The ASTEC results predict that the reactor vessel does not fail when ERVC (external reactor vessel cooling) is used, in spite of extensive melting of the reactor vessel wall in the two-layer formation case of the SBLOCA in the APR1400. According to these preliminary results, the outer surface temperature and heat transfer coefficient conditions have little effect on the vessel geometry change. A more detailed analysis of the effects of the main parameters on corium behaviour in the lower plenum is necessary to evaluate IVR-ERVC in the APR1400, in particular for a three-layer formation of the TLFW. Comparisons of the present results with others are necessary to verify them and apply them to the actual IVR-ERVC evaluation in the APR1400.

  14. Study on severe accidents and countermeasures for WWER-1000 reactors using the integral code ASTEC

    Tusheva, P.; Schaefer, F.; Altstadt, E.; Kliem, S.; Reinke, N.

    2011-01-01

    Investigation and analysis of severe accidents is an important part of nuclear safety research. Thorough studies are needed to maintain the safety barriers as long as possible, to retain the radioactivity within the airtight premises or the containment, to avoid or mitigate the consequences of such events, and to assess the risk. On the one side, severe accident research aims to understand the complex phenomena of the in- and ex-vessel phases, involving reactor physics, thermal-hydraulics, and physicochemical and mechanical processes; on the other side, the investigations strive for effective severe accident management measures. This paper is focused on the possibilities for accident management measures in case of severe accidents. The reactor pressure vessel is the last barrier keeping the molten materials inside the reactor and thus preventing higher loads on the containment. To assess the behaviour of a nuclear power plant during transient or accident conditions, computer codes are widely used; these have to be validated against experiments or benchmarked against other codes. The analyses performed with the integral code ASTEC cover two sequences which could lead to a severe accident: a small break loss of coolant accident and a station blackout. The results have shown that in case of unavailability of the major active safety systems the reactor pressure vessel would ultimately fail. The discussed issues concern the main phenomena during the early and late in-vessel phases of the accident, the time to core heat-up, the hydrogen production, the mass of corium in the reactor pressure vessel lower plenum and the failure of the reactor pressure vessel. Additionally, possible operator actions and countermeasures in the preventive or mitigative domain are addressed. The presented investigations contribute to the validation of the European integral severe accident code ASTEC for the WWER-1000 type of reactor.

  15. Fission-product release modelling in the ASTEC integral code: the status of the ELSA module

    Plumecocq, W.; Kissane, M.P.; Manenc, H.; Giordano, P.

    2003-01-01

    Safety assessment of water-cooled nuclear reactors encompasses potential severe accidents where, in particular, the release of fission products (FPs) and actinides into the reactor coolant system (RCS) is evaluated. The ELSA module is used in the ASTEC integral code to model all releases into the RCS. A wide variety of experiments is used for validation: small-scale CRL, ORNL and VERCORS tests, large-scale Phebus-FP tests, etc. As a tool that covers both intact fuel and degraded states, ELSA is being improved to maximize the use of information from degradation modelling. Short-term improvements will include a treatment of the initial FP release due to intergranular inventories and models for the release of additional structural materials (Sn, Fe, etc.). (author)

  16. SARNET, a success story. Survey of major achievements on severe accidents and of knowledge capitalization within the ASTEC code

    Albiol, T.; Van Dorsselaere, J.P.; Reinke, N.

    2013-01-01

    51 organizations from Europe and Canada cooperated within SARNET (Severe Accident Research Network of Excellence), joining their research capacities in order to resolve the most important pending issues for enhancing, with regard to Severe Accidents (SA), the safety of existing and future Nuclear Power Plants (NPPs). SARNET defines common research programmes and develops common computer codes and methodologies for safety assessment. The ASTEC integral code, jointly developed by IRSN (France) and GRS (Germany) for Light Water Reactor (LWR) source term SA evaluation, level 2 Probabilistic Safety Assessment (PSA) studies and SA management evaluation, is the main integrating component of SARNET. The scientific knowledge generated in the Corium, Source Term and Containment topics has been integrated into the code through improved or new physical models, and ASTEC now constitutes the reference European SA integral code. During the four and a half years of SARNET, 30 partners have assessed the successive versions of the ASTEC V1 code through validation, and more than 60 scientists have been trained in the use of the code. Validation tasks on about 65 experiments were performed to cover all physical phenomena occurring in a severe accident: circuit thermal-hydraulics, core degradation, fission product (FP) release and transport, Molten-Corium-Concrete-Interaction (MCCI), and, in the containment, thermal-hydraulic, aerosol, iodine and hydrogen behaviour. The overall status of validation can be considered as good, with results often close to those of mechanistic codes. Some reach the limits of present knowledge, for instance on MCCI, and, as in most codes, an adequate model for reflooding of a degraded core is still missing. IRSN and GRS are currently preparing the new series of ASTEC V2 versions that will account for most of the needs for evolution expressed by the SARNET partners. The first version, V2.0, planned for March 09, will be applicable to the EPR and will include the ICARE2

  17. Comparative severe accident analysis of WWER 1000/B 320 LOCA DN100 computed by computer codes ASTEC V1.1 and SCDAP/RELAP5

    Kalchev, B.; Dimov, D.; Tusheva, P.; Mladenov, I.

    2005-01-01

    This paper presents the modelling approach for a LOCA 100 mm sequence for the WWER 1000/B 320 type of reactor with the integral ASTEC computer code and the SCDAP/RELAP5 computer code. As a basic input deck, the reference input file for the Balakovo NPP from the released ASTEC CD has been applied. In the first part of the calculations for the SBLOCA sequence, the ASTEC v1.1 modules CESAR, DIVA and CPA have been activated in a coupled mode. For the SCDAP/RELAP5 calculation, an input deck for WWER 1000/B 320 has been applied that was meant to be close to the initial and boundary conditions of the ASTEC WWER 1000 input deck. A comparison of the SBLOCA 100 mm sequence between ASTEC v1.1 and SCDAP/RELAP5 is presented. ASTEC predicts vessel failure at 15620 s. ASTEC and SCDAP/RELAP5 give close but not identical results, as can be observed in the trends. The comparison of the 100 mm break shows that SCDAP/RELAP5 predicts clear phenomenological changes in the primary pressure evolution and molten pool formation. A similar hydrogen production mass is obtained for both codes around 5000 s.

  18. Validation of ASTEC v1.0 computer code against FPT2 test

    Mladenov, I.; Tusheva, P.; Kalchev, B.; Dimov, D.; Ivanov, I.

    2005-01-01

    The aim of this work is to investigate the sensitivity of the ASTEC v1.0 computer code to various nodalization schemes of the model and to validate the code against the PHEBUS FPT2 experiment. This code is used for severe accident analysis. The aim corresponds to the main technical objective of the experiment, which is to contribute to the validation of models and computer codes to be used for the calculation of the source term in case of a severe accident in a Light Water Reactor. The scope of the FPT2 objectives is broad, covering separately the bundle, the experimental circuit and the containment. Additional objectives are to characterize aerosol sizing and deposition processes, as well as potential FP poisoning effects on hydrogen recombiner coupons exposed to containment atmospheric conditions representative of an LWR severe accident. The analyses of the results of the performed calculations show good agreement with the reference case calculations and with the experimental data. Some differences in the calculated thermal behaviour appear locally during the oxidation phase and the heat-up phase. There is very good agreement regarding the volatile and semi-volatile fission product release from the fuel pellets. Important for the analysis of the process is the final axial distribution of the relocated fuel mass obtained at the end of the calculation.

  19. Simulation of experiment on aerosol behaviour at severe accident conditions in the LACE experimental facility with the ASTEC CPA code

    Kljenak, I.; Mavko, B.

    2007-01-01

    The LACE LA4 experiment on thermal-hydraulics and aerosol behaviour in a nuclear power plant containment, performed in the LACE experimental facility, was simulated with the CPA module of the severe accident computer code ASTEC V1.2. The specific purpose of the work was to assess the capability of the module to simulate thermal-hydraulic conditions and aerosol behaviour in the containment of a light-water-reactor nuclear power plant under severe accident conditions. The test was simulated with the boundary conditions described in the experiment report. Results for the thermal-hydraulic conditions in the test vessel, as well as for the dry aerosol concentrations in the test vessel atmosphere, are compared with the experimental results and analyzed. (author)

  20. Assessment of capability for modeling the core degradation in 2D geometry with ASTEC V2 integral code for VVER type of reactor

    Dimov, D.

    2011-01-01

    The ASTEC code is progressively becoming the reference European severe accident integral code, in particular through the intensification of research activities carried out since 2004. The purpose of this analysis is to assess ASTEC code modelling of the main phenomena arising during hypothetical severe accidents, particularly in-vessel degradation in 2D geometry. The investigation covers both the early and the late phase of reactor core degradation, as well as the determination of the corium that will enter the reactor cavity. The initiating event is a station blackout; in order to obtain severe accident conditions, failure of all active components of the emergency core cooling system is assumed. The analysis focuses on the ICARE module of the ASTEC code, particularly on the so-called MAGMA model. The aim of the study is to determine the capability of the integral code to simulate core degradation and to determine the composition of the corium entering the reactor cavity. (author)

  1. Simulation of VVER MCCI reactor test case with ASTEC V2/MEDICIS computer code

    Stefanova, A.; Grudev, P.; Gencheva, R.

    2011-01-01

    This paper presents an application of the MEDICIS module of ASTEC V2 to the simulation of a VVER molten core concrete interaction (MCCI) test case without water injection. The main purpose of the performed calculation is the verification and improvement of the MEDICIS module of ASTEC V2 for better simulation of core-concrete interaction processes. The VVER-1000 reference nuclear power plant was chosen as the SARNET2 benchmark MCCI test case. The initial conditions for the MCCI test are taken from an SBO scenario calculated with ASTEC version 1.3R2 by INRNE. (authors)

  2. First analysis of AGS0, LT2 and E9 CABRI tests with the new SFR safety code ASTEC-Na

    Perez-Martin, Sara; Bandini, Giacomino; Matuzas, Vaidas; Buck, Michael; Girault, Nathalie

    2015-01-01

    Within the framework of the European JASMIN project, the ASTEC-Na code is being developed for the safety analysis of severe accidents in SFRs. For the first phase of validation of the ASTEC-Na fuel thermo-mechanical models, three in-pile tests conducted in the CABRI experimental reactor have been selected for analysis. We present here the preliminary results of the simulation of two transient overpower tests and one power ramp test (AGS0, LT2 and E9, respectively), in which no pin failure occurred during the transient. ASTEC-Na results are compared against experimental data and the results of other safety codes, both for the initial steady-state conditions prior to the transient onset and for the fuel pin behaviour during the transients. (author)

  3. Assessment of the integral code ASTEC with respect to the late in-vessel phase of core degradation

    D'Alessandro, Christophe; Starflinger, Joerg

    2014-01-01

    The integral code ASTEC is being developed jointly by GRS and IRSN as the European reference code for severe accidents. In the EU project CESAM it is foreseen to assess the capabilities of ASTEC to deal with a broad range of reactor designs (PWR, BWR, VVER, CANDU, Gen III+, etc.) and especially to model and capture the effect of severe accident mitigation measures. This requires a physically sound and sufficiently accurate modelling of the processes and phenomena that govern the course of the accident, and the modelling has to be validated to a sufficient extent. Concentrating on the in-vessel aspects of severe accidents, the present paper addresses these requirements by presenting results of ASTEC calculations for relevant experiments that cover the major physical phenomena during core degradation (melting and relocation of the fuel, oxidation, molten corium pool formation and its coolability in the lower plenum once it has slumped from the core region). The assessment of models for bundle degradation is based on the CORA tests 13 and W2; CORA represented a bundle of non-irradiated, electrically heated UO2 rods. Melt progression in strongly degraded geometry is addressed in the PHEBUS FPT4 experiment, carried out with irradiated fuel in a debris bed configuration. The validation of molten pool modelling is based on the BALI and RASPLAV-Salt experiments. The BALI facility consists of a full-scale slice of the lower plenum (allowing experiments at prototypical Rayleigh numbers) and uses uniformly heated water as a simulant for corium. The RASPLAV experiments use a scaled-down slice of the lower head; the use of a non-eutectic molten salt as simulant addresses the effect of the significant solidification range typical of real corium. Calculation results of ASTEC are discussed in comparison with the experimental measurements. Further, questions concerning the extrapolation of findings from validation to reactor application are critically discussed, concerning e.g. the choice of model parameters.
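    The "prototypical Rayleigh numbers" mentioned for BALI refer to the internal Rayleigh number of a volumetrically heated pool. As an illustration only (the property values and heating rate below are assumptions, not taken from the paper):

```python
def internal_rayleigh(Qv, H, beta, nu, alpha, k, g=9.81):
    """Internal Rayleigh number Ra_i = g*beta*Qv*H^5 / (nu*alpha*k) of a
    volumetrically heated pool: height H [m], heating rate Qv [W/m^3],
    thermal expansion beta [1/K], kinematic viscosity nu [m^2/s],
    thermal diffusivity alpha [m^2/s], conductivity k [W/(m K)]."""
    return g * beta * Qv * H**5 / (nu * alpha * k)

# Water-simulant pool at roughly BALI-like scale (all values illustrative)
Ra = internal_rayleigh(Qv=1.0e3, H=2.0, beta=3.0e-4,
                       nu=8.0e-7, alpha=1.5e-7, k=0.6)
```

The H^5 dependence is why a full-scale slice with a low-viscosity simulant such as heated water can reach the Ra_i range of 10^15 and above expected for a prototypic corium pool.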

  4. Validation of CESAR Thermal-hydraulic Module of ASTEC V1.2 Code on BETHSY Experiments

    Tregoures, Nicolas; Bandini, Giacomino; Foucher, Laurent; Fleurot, Joëlle; Meloni, Paride

    The ASTEC V1 system code is being jointly developed by the French Institut de Radioprotection et Sûreté Nucléaire (IRSN) and the German Gesellschaft für Anlagen und ReaktorSicherheit (GRS) to address severe accident sequences in a nuclear power plant. Thermal-hydraulics in primary and secondary system is addressed by the CESAR module. The aim of this paper is to present the validation of the CESAR module, from the ASTEC V1.2 version, on the basis of well instrumented and qualified integral experiments carried out in the BETHSY facility (CEA, France), which simulates a French 900 MWe PWR reactor. Three tests have been thoroughly investigated with CESAR: the loss of coolant 9.1b test (OECD ISP N° 27), the loss of feedwater 5.2e test, and the multiple steam generator tube rupture 4.3b test. In the present paper, the results of the code for the three analyzed tests are presented in comparison with the experimental data. The thermal-hydraulic behavior of the BETHSY facility during the transient phase is well reproduced by CESAR: the occurrence of major events and the time evolution of main thermal-hydraulic parameters of both primary and secondary circuits are well predicted.

  5. Simulation of KAEVER experiments on aerosol behavior in a nuclear power plant containment at accident conditions with the ASTEC code

    Kljenak, I.; Mavko, B.

    2006-01-01

    Experiments on aerosol behaviour in saturated and non-saturated atmospheres, performed in the KAEVER experimental facility, were simulated with the severe accident computer code ASTEC CPA V1.2. The specific purpose of the work was to assess the capability of the code to model aerosol condensation and deposition in the containment of a light-water-reactor nuclear power plant under severe accident conditions, provided the atmosphere saturation conditions are simulated adequately. Five different tests were first simulated with boundary conditions obtained from the experiments. In all five tests a non-saturated atmosphere was simulated, although in four tests the atmosphere was reportedly saturated. The simulations were then repeated with modified boundary conditions to obtain a saturated atmosphere in all tests. Results for the dry and wet aerosol concentrations in the test vessel atmosphere for both sets of simulations are compared with the experimental results. (author)
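    Whether a simulated atmosphere is saturated can be checked by comparing the steam partial pressure with the water saturation pressure at the atmosphere temperature. A minimal sketch using the Antoine correlation (coefficients valid for roughly 1-100 degC; the input values are illustrative, not KAEVER data):

```python
def p_sat_water(T_celsius):
    """Saturation pressure of water [Pa] from the Antoine equation,
    coefficients valid for roughly 1 to 100 degC."""
    p_mmhg = 10.0 ** (8.07131 - 1730.63 / (233.426 + T_celsius))
    return p_mmhg * 133.322  # mmHg -> Pa

def saturation_ratio(p_steam, T_celsius):
    """Relative humidity S = p_steam / p_sat(T); S >= 1 means the
    atmosphere is saturated and steam condenses on aerosol particles,
    growing them into 'wet' aerosol that deposits faster."""
    return p_steam / p_sat_water(T_celsius)

# Illustrative state: 95 kPa steam partial pressure at 98 degC
S = saturation_ratio(95.0e3, 98.0)
saturated = S >= 1.0
```

Because the wet-aerosol growth rate depends strongly on S near 1, a small bias in the simulated steam balance can flip a test from condensing to non-condensing, which is the sensitivity the abstract describes.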

  6. Reactor cooling systems thermal-hydraulic assessment of the ASTEC V1.3 code in support of the French IRSN PSA-2 on the 1300 MWe PWRs

    Tregoures, Nicolas; Philippot, Marc; Foucher, Laurent; Guillard, Gaetan; Fleurot, Joelle

    2010-01-01

    The French Institut de Radioprotection et de Sûreté Nucléaire (IRSN) is performing a level 2 Probabilistic Safety Assessment (PSA-2) on the French 1300 MWe PWRs. This PSA-2 study relies on the ASTEC integral computer code, jointly developed by IRSN and GRS (Germany). In order to assess the reliability and the quality of the physical results of the ASTEC V1.3 code, as well as the PWR 1300 MWe reference input deck, a wide-ranging series of comparisons with the French best-estimate thermal-hydraulic code CATHARE 2 V2.5 has been performed on 14 different severe-accident scenarios. The present paper details 4 of the 14 studied scenarios: a 12-in. cold leg Loss of Coolant Accident (LOCA), a 2-tube Steam Generator Tube Rupture (SGTR), a 12-in. Steam Line Break (SLB) and a total Loss of Feed Water (LFW) scenario. The thermal-hydraulic behavior of the primary and secondary circuits is thoroughly investigated and compared to the CATHARE 2 V2.5 results. The ASTEC results for the core degradation phase are also presented. Overall, the thermal-hydraulic behavior given by ASTEC V1.3 is in very good agreement with the CATHARE 2 V2.5 results.

  7. Review of current severe accident management approaches in Europe and identification of related modelling requirements for the computer code ASTEC V2.1

    Hermsmeyer, S.; Herranz, L.E.; Iglesias, R.; and others

    2015-01-01

    The severe accident at the Fukushima-Daiichi nuclear power plant (NPP) has led to a worldwide review of nuclear safety approaches and is bringing a refocussing of R&D in the field. To support these efforts, several new Euratom FP7 projects have been launched. The CESAM project focuses on the improvement of the ASTEC computer code. ASTEC is jointly developed by IRSN and GRS and is considered the European reference code for severe accident analyses, since it capitalizes the knowledge from the extensive European R&D in the field. The project aims at the code's enhancement and extension for use in Severe Accident Management (SAM) analysis of the NPPs of Generation II-III presently under operation or foreseen in the near future in Europe, spent fuel pools included. The work reported here is concerned with the importance, for the further development of the code, of the SAM strategies to be simulated. To this end, SAM strategies applied in the EU have been compiled. This compilation is mainly based on the public information made available in the frame of the EU "stress tests" for NPPs and has been complemented by information provided by the different CESAM partners. The context of SAM is explained and the strategies are presented. The modelling capabilities for the simulation of these strategies in the current production version 2.0 of ASTEC are discussed. Furthermore, the requirements for the next ASTEC version V2.1, which is supported in the CESAM project, are highlighted. They are a necessary complement to the list of code improvements drawn from consolidating new fields of application, such as SFP and BWR model enhancements, and from new experimental results on severe accident phenomena.

  9. ASTEC validation on PANDA SETH

    Bentaib, Ahmed; Bleyer, Alexandre; Schwarz, Siegfried

    2009-01-01

    The ASTEC code, developed jointly by IRSN and GRS, is intended to provide an integral code for the simulation of the whole course of severe accidents in light water reactors. ASTEC is a complex system of codes for reactor safety assessment. In this validation, only the thermal-hydraulic module of the ASTEC code is used. ASTEC is a lumped-parameter code able to represent multi-compartment containments. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. This paper presents the validation of ASTEC V1.3 on tests T9 and T9bis of the PANDA OECD/SETH experimental program, which investigate the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Dedicated meshes were developed to simulate the test facility with the two vessels DW1 and DW2 and the interconnection pipe. The obtained numerical results are analyzed and compared to the experiments. The comparison shows good agreement between experiments and calculations. (author)
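
    The zone/junction concept described above can be illustrated with a deliberately minimal sketch: two well-mixed gas volumes connected by one junction whose molar flow is driven by the pressure difference (ideal gas, isothermal). The flow law and all constants below are assumed placeholders, not the ASTEC thermal-hydraulic models:

```python
R = 8.314  # universal gas constant, J/(mol K)

def equilibrate(n1, n2, v1, v2, t_gas=300.0, k=1.0e-5, dt=0.01, steps=200_000):
    """Explicit time-marching of gas exchange between two zones.

    n1, n2: moles of gas in each zone; v1, v2: zone volumes (m^3).
    The junction molar flow is taken proportional to the pressure
    difference (coefficient k, an assumed placeholder).
    """
    for _ in range(steps):
        p1 = n1 * R * t_gas / v1          # ideal-gas pressure, zone 1
        p2 = n2 * R * t_gas / v2          # ideal-gas pressure, zone 2
        flow = k * (p1 - p2)              # molar flow through the junction
        n1 -= flow * dt
        n2 += flow * dt
    return p1, p2

# The two zone pressures relax toward a common equilibrium value.
p1, p2 = equilibrate(n1=100.0, n2=40.0, v1=2.0, v2=1.0)
```

    A real containment code adds water pools, heat structures, condensation and momentum-based junction flow on top of exactly this kind of zone bookkeeping.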

  10. Simulation of the containment spray system test PACOS PX2.2 with the integral code ASTEC and the containment code system COCOSYS

    Risken, Tobias; Koch, Marco K.

    2011-01-01

    Reactor safety research includes the analysis of postulated accidents in nuclear power plants (NPPs). These accidents may involve a loss of coolant from the plant's reactor coolant system, during which heat and pressure within the containment increase. To handle these atmospheric conditions, containment spray systems are installed in various light water reactors (LWRs) worldwide as part of the accident management system. To improve and ensure safety in NPP operation and accident management, numerical simulations of postulated accident scenarios are performed. The calculations presented here, performed at Ruhr-Universitaet Bochum, address the predictability of the containment spray system's effect with the integral code ASTEC and the containment code system COCOSYS. For this purpose, the test PACOS Px2.2 is simulated, in which water is sprayed into the stratified containment atmosphere of the BMC (Battelle Modell-Containment). (orig.)

  11. Air oxidation of Zircaloy-4 in the 600-1000 °C temperature range: Modeling for ASTEC code application

    Coindreau, O.; Duriez, C.; Ederli, S.

    2010-10-01

    Progress in the treatment of air oxidation of zirconium in severe accident (SA) codes is required for a reliable analysis of severe accidents involving air ingress. Air oxidation of zirconium can lead to accelerated core degradation and increased fission product release, especially of the highly radiotoxic ruthenium. This paper presents a model to simulate the air oxidation kinetics of Zircaloy-4 in the 600-1000 °C temperature range. It is based on available experimental data, including separate-effect experiments performed at IRSN and at Forschungszentrum Karlsruhe. The kinetic transition, named "breakaway", from a diffusion-controlled regime to an accelerated oxidation is taken into account in the model via a critical mass gain parameter. The progressive propagation of the locally initiated breakaway is modeled by a linear increase in oxidation rate with time. Finally, when breakaway propagation is complete, the oxidation rate stabilizes and the kinetics is modeled by a linear law. This new model is integrated in the severe accident code ASTEC, jointly developed by IRSN and GRS. Model predictions show good agreement with thermogravimetric data for different air flow rates and for slow temperature transient conditions.
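
    The three kinetic regimes described above (diffusion-controlled parabolic growth, a linear ramp of the rate while breakaway propagates, then a constant linear rate) can be sketched as a piecewise rate law. All numerical values and names below are illustrative placeholders, not the fitted ASTEC model parameters:

```python
def mass_gain_history(t_end=2000.0, dt=0.1,
                      k_p=4.0e-4,    # parabolic constant, (mg/cm^2)^2/s (assumed)
                      dm_crit=0.5,   # critical mass gain triggering breakaway, mg/cm^2 (assumed)
                      ramp=1.0e-5,   # rate increase during breakaway propagation, mg/cm^2/s^2 (assumed)
                      r_lin=5.0e-3): # final linear oxidation rate, mg/cm^2/s (assumed)
    """Integrate the piecewise kinetics; returns (t, mass_gain, rate) samples."""
    m = 1.0e-3          # small initial mass gain avoids the 1/m singularity
    t_b = r_b = None    # breakaway onset time and rate
    out, t = [], 0.0
    while t < t_end:
        if t_b is None:
            rate = k_p / (2.0 * m)                 # parabolic regime: m^2 ~ k_p * t
            if m >= dm_crit:                       # breakaway initiation
                t_b, r_b = t, rate
        else:
            rate = min(r_b + ramp * (t - t_b), r_lin)  # ramp, then constant linear law
        m += rate * dt
        out.append((t, m, rate))
        t += dt
    return out

hist = mass_gain_history()
```

    With these placeholder values, breakaway initiates once the parabolic mass gain reaches the critical value and the rate then ramps up until it saturates at the constant linear rate.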

  12. Comparative accident analyses for a KONVOI-type PWR using the integral codes ASTEC V1.33 and MELCOR 1.8.6

    Reinke, Nils; Erdmann, Walter; Nowack, Holger; Sonnenkalb, Martin

    2010-08-01

    In the frame of the project RS1180, funded by the German Federal Ministry for Economics and Technology (BMWi), calculations have been carried out by GRS with the integral code ASTEC V1.33 p3 for two postulated accidents in a nuclear power plant with a KONVOI-type pressurized water reactor, and compared to calculations with MELCOR 1.8.6 YU. The major objective was to assess the capability of ASTEC for application in level 2 probabilistic safety analyses (PSA). In particular, it was investigated to which extent ASTEC is able to perform such integral calculations while meeting criteria with regard to both reasonable calculation time and the specific boundary conditions necessary for PSA analyses. Two exemplary accidents were selected: a transient with loss of steam generator feed water, and a small break loss of coolant accident (50 cm²) in the cold leg of the coolant line connected to the pressurizer. In principle, the results demonstrate the capability of ASTEC V1.33 to carry out such PSA level 2 calculations. In addition, it has to be noted that for both ASTEC and MELCOR the requirements on the quality of the results lead to prolonged calculation times due to more detailed nodalisations of the whole plant. This is valid for the core region as well as for the primary circuit and for the containment. Consequently, calculation times in the order of one day to two weeks result, thereby excluding extensive parameter analyses to assess the sensitivity of the calculation results. Concerning the quality of the results, good agreement can be stated between ASTEC and MELCOR results in terms of global data. In detail, some results are sensitive to user effects. Here, the nodalisation seems to be of major influence, besides differences in modeling specific phenomena. The comparison suggests that in particular the influence of the nodalisation, defined by the user and depending on the user's experience, should be carefully evaluated. Since some

  13. On-going activities in the European JASMIN project for the development and validation of ASTEC-Na SFR safety simulation code - 15072

    Girault, N.; Cloarec, L.; Herranz, L.; Bandini, G.; Perez-Martin, S.; Ammirabile, L.

    2015-01-01

    The 4-year JASMIN collaborative project (Joint Advanced Severe accidents Modelling and Integration for Na-cooled fast reactors) started in December 2011 in the frame of the 7th Framework Programme of the European Commission. It aims at developing a new European simulation code, ASTEC-Na, dealing with the primary phase of SFR core disruptive accidents. The development of a new code, based on a robust advanced simulation tool and able to encompass the in-vessel and in-containment phenomena occurring during a severe accident, is indeed of utmost interest for advanced and innovative future SFRs, for which an enhanced safety level will be required. This code, based on the ASTEC European code system developed by IRSN and GRS for severe accidents in water-cooled reactors, is progressively integrating and capitalizing the state-of-the-art knowledge of SFR accidents through improvement of physical models or development of new ones. New models are assessed on in-pile (CABRI, SCARABEE, etc.) and out-of-pile experiments conducted during the 1970s and 1980s, and code-to-code benchmarking with current accident simulation tools for SFRs is also conducted. During the first two and a half years of the project, model specifications and developments were carried out and the validation test matrix was built. The first version of ASTEC-Na, available in early 2014, already includes a thermal-hydraulics module able to simulate single- and two-phase sodium flow conditions, a zero-dimensional (point kinetics) neutronic model with a simple definition of channel and axial dependences of reactivity feedbacks, and models derived from the SCANAIR IRSN code for simulating fuel pin thermo-mechanical behaviour and fission gas release/retention. Meanwhile, models have been developed in the source term area for in-containment particle generation and particle chemical transformation, but their implementation is still to be done. As a first validation step, the ASTEC-Na calculations were satisfactorily compared to thermal-hydraulics experimental

  14. Application of ASTEC, MELCOR, and MAAP Computer Codes for Thermal Hydraulic Analysis of a PWR Containment Equipped with the PCFV and PAR Systems

    Siniša Šadek

    2017-01-01

    The integrity of the containment will be challenged during a severe accident due to pressurization caused by the accumulation of steam and other gases and the possible ignition of hydrogen and carbon monoxide. The installation of a passive filtered venting system and passive autocatalytic recombiners allows control of the pressure, radioactive releases, and concentration of flammable gases. A thermal hydraulic analysis of the containment equipped with dedicated passive safety systems after a hypothetical station blackout event is performed for a two-loop pressurized water reactor NPP with three integral severe accident codes: ASTEC, MELCOR, and MAAP. MELCOR and MAAP are two major US codes for severe accident analyses, and the ASTEC code is the European code, joint property of the Institut de Radioprotection et de Sûreté Nucléaire (IRSN, France) and the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS, Germany). The codes' overall characteristics, physics models, and the analysis results are compared herein. Despite considerable differences between the codes' modelling features, the general trends of the NPP behaviour are found to be similar, although discrepancies related to the simulation of the processes in the containment cavity are also observed and discussed in the paper.

  15. Stand-Alone Containment Analysis of the Phébus FPT Tests with the ASTEC and the MELCOR Codes: The FPT-0 Test

    Bruno Gonfiotti

    2017-01-01

    The integral Phébus tests were probably one of the most important experimental campaigns performed to investigate the progression of severe accidents in light water reactors. In these tests, the degradation of a PWR fuel bundle was investigated employing different control rod materials and burn-up levels in strongly or weakly oxidizing conditions. From the results of such tests, numerical codes such as ASTEC and MELCOR have been developed to describe the evolution of a severe accident. After the termination of the experimental Phébus campaign, these two codes were further expanded. Therefore, the aim of the present work is to reanalyze the first Phébus test (FPT-0) employing the updated ASTEC and MELCOR versions, to ensure that the new improvements introduced in these codes also allow a better prediction of the Phébus tests. The analysis focuses on the stand-alone containment aspects of this test, and the paper summarizes the main thermal-hydraulic results and presents different sensitivity analyses carried out on the aerosol and fission product behavior. This paper is part of a series of publications covering the four executed Phébus tests employing a solid PWR fuel bundle: FPT-0, FPT-1, FPT-2, and FPT-3.

  16. Comparative analysis of the results obtained by computer code ASTEC V2 and RELAP 5.3.2 for small leak ID 80 for VVER 1000

    Atanasova, B.; Grudev, P.

    2011-01-01

    The purpose of this report is to present the results obtained by the simulation and subsequent analysis of an accident sequence with a small break of 80 mm equivalent diameter (ID 80) for WWER 1000/B320 - Kozloduy NPP Units 5 and 6. The calculations were performed with the ASTEC V2 computer code for severe accident analysis, jointly developed by the French and German institutes IRSN and GRS. The integral RELAP5 computer code is used as a reference for comparison of the results. The analyses focus on the in-vessel processes during the phase of the sequence with significant core damage. The main thermohydraulic parameters, the start of reactor core degradation and the subsequent fuel relocation up to reactor vessel failure are evaluated in the analysis. The RELAP5 code serves as a reference to compare the results obtained up to early core degradation, which occurs after core uncovery when the fuel temperature exceeds 1200 °C

  17. ASTEC validation on PANDA SETH

    Bentaib, A.; Bleyer, A.

    2011-01-01

    The ASTEC code, jointly developed by IRSN and GRS (Gesellschaft für Anlagen- und Reaktorsicherheit mbH), is intended to provide an integral code for the simulation of the whole course of severe accidents in light water reactors. ASTEC is a complex system of codes for reactor safety assessment. In this benchmark, only the CPA (Containment Part of ASTEC) module is used. CPA is a lumped-parameter code able to represent multi-compartment containments. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. In this paper, three tests selected from the PANDA SETH benchmark (tests 9, 9bis and 25) are considered to investigate the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Coarse and fine meshes were developed representing the test facility with the two vessels DW1 and DW2 and the interconnection pipe. The obtained numerical results are analyzed and compared to the experiments. The comparison shows good agreement between experiments and calculations. (author)

  18. The integral analysis of 40 mm diameter pipe rupture in cooling system of fusion facility W7-X with ASTEC code

    Kačegavičius, Tomas, E-mail: Tomas.Kacegavicius@lei.lt; Povilaitis, Mantas, E-mail: Mantas.Povilaitis@lei.lt

    2015-12-15

    Highlights: • Analysis of a loss-of-coolant accident (LOCA) in the W7-X facility. • The burst disc is sufficient to prevent the pressure inside the plasma vessel from exceeding 110 kPa. • The developed model of the cooling system adequately represents the expected phenomena. - Abstract: Fusion is an energy production technology that could potentially meet the growing energy demand of the world's population. Wendelstein 7-X (W7-X) is an experimental facility of the stellarator type, currently being built at the Max Planck Institute for Plasma Physics in Greifswald, Germany. W7-X shall demonstrate that energy could in future be produced in fusion reactors of this type. A safety analysis is required before operation of the facility can start. A rupture of a 40 mm diameter pipe connected to the divertor unit (the module for plasma cooling), which ensures heat removal from the vacuum vessel in the no-plasma operation mode "baking", is one of the design basis accidents to be investigated. During the "baking" mode, the vacuum vessel structures and the working fluid (water) are heated to a temperature of 160 °C. This accident was selected for detailed analysis using the integral code ASTEC, which is developed by IRSN (France) and GRS mbH (Germany). This paper presents the integral analysis of the W7-X response to the selected accident scenario. A model of the main cooling circuit and the "baking" circuit was developed for the ASTEC code. Two cases were analysed: (1) rupture of a pipe connected to the upper divertor unit and (2) rupture of a pipe connected to the lower divertor unit. The results of the analysis showed that in both cases the water is almost completely released from the units into the plasma vessel. In both cases the pressure in the plasma vessel rapidly increases, and after 28 s the set point for burst disc opening is reached, preventing further pressurisation.

  19. Enhancement of ASTEC and COCOSYS regarding fission product release during MCCI

    Agethen, Kathrin [Bochum Univ. (Germany). Reactor Simulation and Safety Group

    2016-10-15

    The focus of this paper is on the enhancement of the fission product release model during molten core concrete interaction (MCCI) in the severe accident analysis codes ASTEC and COCOSYS. After both codes are harmonised and the model interaction as well as the input parameters are adapted, extended model approaches are implemented. These lead to improved release rates for selected semi-volatile species, validated against the ACE tests under ex-vessel conditions.

  20. Stand-alone containment analysis of Phébus FPT tests with ASTEC and MELCOR codes: the FPT-2 test.

    Gonfiotti, Bruno; Paci, Sandro

    2018-03-01

    During the last 40 years, many studies have been carried out to investigate the different phenomena occurring during a Severe Accident (SA) in a Nuclear Power Plant (NPP). Such efforts have been supported by the execution of different experimental campaigns, and the integral Phébus FP tests were probably some of the most important experiments in this field. In these tests, the degradation of a Pressurized Water Reactor (PWR) fuel bundle was investigated employing different control rod materials and burn-up levels in strongly or weakly oxidizing conditions. From the findings of these and previous tests, numerical codes such as ASTEC and MELCOR have been developed to analyze the evolution of a SA in real NPPs. After the termination of the Phébus FP campaign, these two codes have been further improved to implement more recent findings from other experimental campaigns. Therefore, continuous verification and validation is still necessary to check that the new improvements introduced in these codes also allow a better prediction of the Phébus tests. The aim of the present work is to re-analyze the Phébus FPT-2 test employing the updated ASTEC and MELCOR code versions. The analysis focuses on the stand-alone containment aspects of this test, and three different spatial nodalizations of the containment vessel (CV) have been developed. The paper summarizes the main thermal-hydraulic results and presents different sensitivity analyses carried out on the aerosol and fission product (FP) behavior. When possible, a comparison between the results obtained in this work and those obtained by different authors in previous work is also performed. This paper is part of a series of publications covering the four Phébus FP tests using a PWR fuel bundle: FPT-0, FPT-1, FPT-2, and FPT-3, excluding the FPT-4 test, which is related to the study of the release of low-volatility FP and transuranic elements from a debris bed and a pool of melted fuel.

  2. Progress and perspectives of ASTEC applications in the European Network SARNET

    Van Dorsselaere, J.P.; Allelein, H.J.; Neu, K.

    2006-01-01

    comparisons to be performed more intensively in the next period is to learn whether the differences in results are caused by differences in physical models and/or in safety systems modelling. These benchmarks will also focus more on fission product behaviour and on specific parts of the sequences such as MCCI. Concrete action plans and associated teams have been set up for BWR and CANDU model adaptation and future benchmarks. In the short term, the code evolution will focus on feedback from the PSA level 2 on the 1300 MWe PWRs and from SARNET ASTEC V1 applications, and on a new model of reflooding of a degraded core. The documentation will be largely improved, mainly the user manuals and user guidelines. In parallel, the preparation by IRSN-GRS of a new family V2 of ASTEC versions has started. The general specifications will account for the needs expressed by the SARNET users. ASTEC V2.0 is planned for 2008, in which the ICARE2 IRSN mechanistic code will be the new core degradation module. This ASTEC version will include EPR applicability, the simulation of external vessel cooling for new reactor designs and a full modelling of ruthenium behaviour in the circuit and containment for air ingress situations. Beyond that, future evolutions of the ASTEC code will act as a repository of the knowledge created in SARNET and in the international context. The possible use of ASTEC to analyse severe accident sequences in future reactors (Generation IV, ITER) is under consideration. (authors)

  3. Adaptive distributed source coding.

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
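
    The Slepian-Wolf bound referenced above states that, with side information Y available at the decoder, X can be recovered losslessly from roughly H(X|Y) bits per symbol. For the common toy model in which Y is X observed through a binary symmetric channel with crossover probability p (a simplification used here only for illustration, not the paper's block-candidate model), H(X|Y) equals the binary entropy h(p):

```python
import math

def binary_entropy(p):
    """h(p) in bits; the Slepian-Wolf rate H(X|Y) for the BSC toy model."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# Minimum syndrome rate (bits per source symbol) for lossless recovery of X
# when the decoder already holds Y, for a few correlation levels:
for p in (0.01, 0.05, 0.1):
    print(f"p = {p}: H(X|Y) = {binary_entropy(p):.3f} bits/symbol")
```

    The stronger the correlation (smaller p), the fewer syndrome bits the encoder must send, which is what makes selecting the doping rate against an estimated p worthwhile.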

  4. VVER-1000 small-medium break LOCAs predictions by ASTEC

    Georgieva, J.; Stefanova, A.; Atanasova, B.; Groudev, P.; Tusheva, P.; Mladenov, I.; Dimov, D.; Passalacqua, R.

    2005-01-01

    This paper deals with an assessment of the ASTEC1.1v0 code in the simulation of small and medium break LOCAs (ranging from 30 mm up to 70 mm equivalent diameter). The reference power plant for this analysis is a VVER-1000/V320 (e.g. Units 5 and 6 at Kozloduy NPP). A preliminary comparison with the MELCOR and RELAP-SCDAP severe accident codes will be discussed. This investigation has been performed in the framework of the SARNET project (under the Euratom 6th Framework Programme) by the FoBAUs group (Forum of Bulgarian ASTEC Users). The FoBAUs group aims at the validation of the ASTEC code in the field of severe accidents. Future activities will target the ASTEC capability (as a PSA level 2 tool) to simulate a large range of reactor accident scenarios with intervention of safety systems (either passive systems or systems operated by operators). The final target is to assess Severe Accident Management (SAM) procedures for VVER-1000 reactors. The ASTEC1.1v0 code version used here is the one released in June 2004 by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) and the German GRS (Gesellschaft für Anlagen- und Reaktorsicherheit mbH). (author)

  5. Adaptive decoding of convolutional codes

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches: syndrome zero sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
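
    The "syndrome zero sequence deactivation" idea rests on a simple algebraic fact: for a rate-1/2 convolutional code with generator polynomials g1 and g2, the two encoded streams v1 = u·g1 and v2 = u·g2 satisfy v1·g2 + v2·g1 = 0 over GF(2), so an all-zero syndrome indicates an apparently error-free block and the costly trellis search can be skipped. A sketch using the textbook (7, 5) octal generators; the framing is ours, not the paper's exact decoder:

```python
def conv_gf2(seq, gen):
    """Polynomial multiplication (convolution) of bit sequences over GF(2)."""
    out = [0] * (len(seq) + len(gen) - 1)
    for i, s in enumerate(seq):
        if s:
            for j, g in enumerate(gen):
                out[i + j] ^= g
    return out

def syndrome(v1, v2, g1, g2):
    """v1*g2 + v2*g1 over GF(2); all-zero for a valid codeword."""
    return [a ^ b for a, b in zip(conv_gf2(v1, g2), conv_gf2(v2, g1))]

g1, g2 = [1, 1, 1], [1, 0, 1]        # generators (7, 5) in octal
u = [1, 0, 1, 1, 0, 0]               # message bits with flush zeros
v1, v2 = conv_gf2(u, g1), conv_gf2(u, g2)

clean = not any(syndrome(v1, v2, g1, g2))   # zero syndrome: Viterbi can be bypassed
v1[2] ^= 1                                   # inject one channel error
hit = any(syndrome(v1, v2, g1, g2))          # non-zero: run the full decoder
```

    Since errors make the syndrome non-zero, checking it first costs only a convolution and lets the decoder deactivate under good channel conditions.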

  7. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Lei Ye

    2009-01-01

This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
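
The subband-adaptive idea — pick a modulation per subband from its channel quality, then apply a turbo-code rate on top — can be sketched as follows. The SNR thresholds below are hypothetical placeholders, not values from the paper.

```python
# Per-subband adaptive modulation sketch: each OFDM subband picks the densest
# constellation whose SNR threshold its channel meets. Thresholds are
# illustrative assumptions only.

SCHEMES = [          # (name, bits per symbol, min SNR in dB -- assumed)
    ("BPSK",  1,  4.0),
    ("QPSK",  2,  7.0),
    ("8AMPM", 3, 10.0),
    ("16QAM", 4, 13.0),
    ("64QAM", 6, 19.0),
]

def pick_scheme(snr_db):
    """Return the highest-rate scheme whose threshold is met (None: no tx)."""
    best = None
    for name, bits, thr in SCHEMES:
        if snr_db >= thr:
            best = (name, bits)
    return best

def subband_throughput(subband_snrs_db, code_rate=0.5):
    """Coded bits per symbol, summed over subbands, for a given code rate."""
    total = 0.0
    for snr in subband_snrs_db:
        choice = pick_scheme(snr)
        if choice:
            total += choice[1] * code_rate
    return total
```

A separate-turbo-code system would additionally pick `code_rate` per subband, whereas the single-turbo-code system applies one rate across all subbands.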

  8. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Burr Alister

    2009-01-01

This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  9. ASTEC application to in-vessel corium retention

    Tarabelli, D.; Ratel, G.; Pelisson, R.; Guillard, G.; Barnak, M.; Matejovic, P.

    2009-01-01

This paper summarizes the work done in the SARNET European Network of Excellence on Severe Accidents (6th Framework Programme of the European Commission) on the capability of the ASTEC code to simulate in-vessel corium retention (IVR). This code, jointly developed by the French Institut de Radioprotection et de Surete Nucleaire (IRSN) and the German Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS) for the simulation of severe accidents, is now considered the European reference simulation tool. First, the DIVA module of the ASTEC code is briefly introduced. This module treats core degradation and corium thermal behaviour once the corium has relocated to the reactor lower head. The former ASTEC V1.2 version assumed a predefined stratified molten pool configuration with a metallic layer on top of the volumetrically heated oxide pool. In order to reflect the results of the MASCA project, improved models that enable the modelling of more general corium pool configurations were implemented by the CEA (France) into the DIVA module of the ASTEC V1.3 code. In parallel, the CEA was working on ASTEC modelling of external reactor vessel cooling (ERVC). The capability of the ASTEC CESAR circuit thermal-hydraulics to simulate the ERVC was tested. The conclusion was that the CESAR module is capable of simulating this system, although some numerical and physical instabilities can occur. Developments were then made on the coupling between the DIVA and CESAR modules in close collaboration with IRSN. In specific conditions, code oscillations remain, and an analysis was made to reduce the numerical part of these oscillations. A comparison of CESAR results against the SULTAN experiments (CEA) showed agreement on the pressure differences. The ASTEC V1.2 code version was applied to IVR simulation for VVER-440/V213 reactors assuming defined corium mass, composition and decay heat. The external cooling of the reactor wall was simulated by applying imposed coolant temperature and heat transfer

  10. Analysis of the COLIMA CA-U3 test using the ELSA module of ASTEC

    Godin-Jacqmin, L.; Journeau, C.; Piluso, P.

    2006-01-01

    The main purpose of this study is to calculate the COLIMA CA-U3 experimental test with the ELSA module of the ASTEC code. This experimental test was performed to represent the fission product and structural material releases from a VVER-440 magma type configuration. Thus, some additional work is also done on test result analyses and corresponding ASTEC parameter usage to model as closely as possible the test configuration. Code results on fission product releases are compared to experimental results in a qualitative way for all elements that can be evaluated by the ASTEC code. Sensitivity cases are also performed on the gas flow rate carrying the fission product. (author)

  11. Rate-adaptive BCH codes for distributed source coding

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
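
The feedback-driven rate adaptation can be illustrated with a deliberately simplified scheme: one parity bit per block first, the full block only on a reported mismatch. This is a toy stand-in for the BCH syndrome construction; the block size and the blind spot it has (an even number of differing bits in a block goes undetected) are artifacts of the simplification, mirroring the reliability trade-off the paper analyses.

```python
# Toy rate-adaptive distributed coding with a noiseless feedback channel.
# The decoder knows side information Y (a noisy copy of X); the encoder sends
# one parity bit per block and, only where feedback reports a mismatch, the
# full block itself.

def transmit(X, Y, block=4):
    bits_sent = 0
    decoded = []
    for i in range(0, len(X), block):
        xb, yb = X[i:i + block], Y[i:i + block]
        bits_sent += 1                      # one parity bit per block
        if sum(xb) % 2 == sum(yb) % 2:
            decoded += yb                   # decoder trusts its side info
        else:
            bits_sent += len(xb)            # feedback: request the block
            decoded += xb
    return decoded, bits_sent
```

The higher the X-Y correlation, the fewer blocks trigger a retransmission, so the effective rate adapts to the correlation — the same principle the feedback channel serves in the rate-adaptive BCH setting.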

  12. ASTEC applications to VVER-440/V213 reactors

    Matejovic, Peter, E-mail: ivstt@nextra.sk; Barnak, Miroslav; Bachraty, Milan; Vranka, Lubomir

    2014-06-01

Since the beginning of ASTEC development by IRSN and GRS, the code has been widely applied to VVER reactors. In this paper, specific features of the VVER-440/V213 reactor design that are important from the modelling point of view are first briefly described. Then the validation of the ASTEC code, with focus on its applicability to VVER reactors, is briefly summarised, and the results obtained with the ASTEC V2.0-rev1 version for the ISP-33 PACTEL natural circulation experiment are presented. In the next section, the application of the ASTEC V2.0-rev1 code in the upgrade of VVER-440/V213 NPPs to cope with the consequences of severe accidents is described. This upgrade includes the adoption of in-vessel retention via external reactor vessel cooling and the installation of large-capacity passive autocatalytic recombiners. Results of the analysis, with focus on corium localisation and stabilisation inside the reactor vessel, hydrogen control in the confinement and prevention of long-term confinement pressurisation, are presented.

  13. Application of ASTEC V2.0 to severe accident analyses for German KONVOI type reactors

    Nowack, H.; Erdmann, W.; Reinke, N.

    2011-01-01

    The integral code ASTEC is jointly developed by IRSN (Institut de Radioprotection et de Surete Nucleaire, France) and GRS (Germany). Its main objective is to simulate severe accident scenarios in PWRs from the initiating event up to the release of radioactive material into the environment. This paper describes the ASTEC modeling approach and the nodalisation of a KONVOI type PWR as an application example. Results from an integral severe accident study are presented and shortcomings as well as advantages are outlined. As a conclusion, the applicability of ASTEC V2.0 for deterministic severe accident analyses used for PSA level 2 and Severe Accident Management studies will be assessed. (author)

  14. Validation of ASTEC core degradation and containment models

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

Ruhr-Universitaet Bochum performed, in a German funded project, validation of in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany). In this paper selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the QUENCH experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable for higher temperatures, while the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other, and therefore the quality of the simulations depends on the characteristics of each model. Accordingly, the CPA FRONT model, requiring the simplest input parameters, provides the best agreement with the experimental data.
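
The correlation switch mentioned above follows the usual parabolic-kinetics pattern: a rate constant K(T) = A·exp(−B/T) with different (A, B) pairs per temperature regime, and mass gain growing as the square root of time. The sketch below uses purely illustrative coefficients, not the actual Cathcart–Pawel, Urbanic–Heidrick or BEST-FIT constants.

```python
import math

# Parabolic Zircaloy oxidation kinetics sketch: dm^2/dt = K(T) with an
# Arrhenius rate constant, switching correlations by temperature range.
# The (A, B) coefficients and switch temperature are placeholder assumptions.

def rate_constant(T, low=(0.36, 2.0e4), high=(0.54, 1.7e4), T_switch=1853.0):
    """K(T) = A * exp(-B / T); pick (A, B) by temperature regime."""
    A, B = low if T < T_switch else high
    return A * math.exp(-B / T)

def mass_gain(T, t, m0=0.0):
    """Integrate dm^2/dt = K(T) at constant T: m = sqrt(m0^2 + K * t)."""
    return math.sqrt(m0 ** 2 + rate_constant(T) * t)
```

The parabolic law makes the oxide layer self-limiting in time, while the regime switch captures the change of kinetics (and hence hydrogen source term) at high temperature that the validation above probes.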

  15. Continuous validation of ASTEC containment models and regression testing

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities by calculations of appropriate experiments done by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which leads to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results are still valid with succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test-case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic for the test-case under consideration. In the case of post-calculations of experiments also a comparison to experimental data is carried out. Three validation cases from the regression testing procedure are presented within this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated on the BMC VANAM M3 experiment, and the results show also a very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test-case of the THAI IOD-11 experiment. 
Within this test-case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing.
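
The regression-testing procedure described — same input deck, two code versions, comparison on characteristic physical parameters — reduces to a check of this shape; the parameter names and tolerances here are illustrative, not GRS's actual criteria.

```python
# Minimal regression-test comparison: flag every characteristic parameter
# whose relative deviation between two code versions exceeds its tolerance.

def compare_versions(results_old, results_new, tolerances):
    """Return the parameters whose relative deviation exceeds tolerance."""
    regressions = []
    for param, tol in tolerances.items():
        old, new = results_old[param], results_new[param]
        if abs(new - old) > tol * abs(old):
            regressions.append(param)
    return regressions
```

An empty return list means the new code version reproduces the validated results within tolerance, so the earlier validation remains applicable.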

  16. Adaptive RAC codes employing statistical channel evaluation ...

    An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...

  17. Overview of the independent ASTEC V2.0 validation by SARNET partners

    Chatelard, Patrick; Arndt, Siegfried; Atanasova, Boryana; Bandini, Giacomino; Bleyer, Alexandre; Brähler, Thimo; Buck, Michael; Kljenak, Ivo; Kujal, Bohumir

    2014-01-01

Significant efforts are put into the assessment of the severe accident integral code ASTEC, jointly developed for several years by IRSN and GRS, either through comparison with the results of the most important international experiments or through benchmarks with other severe accident simulation codes on plant applications. These efforts are made in first priority by the code developers' organisations, IRSN and GRS, and also by numerous partners, in particular in the frame of the SARNET European network. The first version of the new series ASTEC V2 had been released in July 2009 to SARNET partners. Two subsequent V2.0 code revisions, including several modelling improvements, were then released to the same partners in 2010 and 2011, respectively. This paper first summarises the approach of ASTEC validation vs. experiments, along with a description of the validation matrix, and then presents a few examples of applications of the ASTEC V2.0-rev1 version carried out in 2011 by the SARNET users. These calculation examples are selected in a way to cover diverse aspects of severe accident phenomenology, i.e. to cover both in-vessel and ex-vessel processes, in order to provide a good picture of the current ASTEC V2 capabilities. Finally, the main lessons drawn from this joint validation task are summarised, along with an evaluation of the current physical modelling relevance and thus an identification of the ASTEC V2.0 validity domain.

  18. Adaptive Space–Time Coding Using ARQ

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed

  19. Analysis of ASTEC-Na capabilities for simulating a loss of flow CABRI experiment

    Flores y Flores, A.; Matuzas, V.; Perez-Martin, S.; Bandini, G.; Ederli, S.; Ammirabile, L.; Pfrang, W.

    2016-01-01

Highlights: • ASTEC-Na results for the CABRI BI1 test have been compared with experimental data. • The ASTEC-Na calculations reached the boiling onset within the error bar of the test. • The coolant axial profile in ASTEC-Na fits the experimental data almost perfectly. • All the calculations show good agreement with the downward-moving two-phase front. • All the calculations show worse agreement with the upward-moving two-phase front. - Abstract: This paper presents simulation results of the CABRI BI1 test using the code ASTEC-Na, currently under development, as well as a comparison of the results with available experimental data. The EU-JASMIN project (7th FP of EURATOM) centres on the development and validation of the new severe accident analysis code ASTEC-Na (Accident Source Term Evaluation Code) for sodium-cooled fast reactors, whose owner and developer is IRSN. A series of experiments performed in the past (CABRI/SCARABEE experiments) and new experiments to be conducted in the new experimental sodium facility KASOLA have been chosen to validate the developed ASTEC-Na code. One of the in-pile experiments considered for the validation of ASTEC-Na thermal–hydraulic models is the CABRI BI1 test, a pure loss-of-flow transient using a low burnup MOX fuel pin. The experiment resulted in a channel voiding as a result of the flow coast-down leading to clad melting. Only some fuel melting took place. Results from the analysis of this test using the SIMMER and SAS-SFR codes are also presented in this work to check their suitability for further code benchmarking purposes.

  20. ASTEC participation in the international standard problem on KAEVER

    Spitz, P.; Van Dorsselaere, J.P.; Schwinges, B.; Schwarz, S.

    2001-01-01

The objective of International Standard Problem no. 44 (ISP44) was the aerosol depletion behaviour under severe accident conditions in a LWR containment, examined in the KAEVER test facility of Battelle (Germany). Nine organisations participated in the ISP44 with 5 different codes, including a joint participation of GRS and IPSN with the integral code ASTEC (and in particular the CPA module) that they have commonly developed. Five tests were selected from the KAEVER test matrix: K123, K148, K186 and K188 as open standard problems and the three-component test K187 as a blind standard problem. All these tests were performed in supersaturated conditions and with slight fog formation, which are the most ambitious conditions for the coupled problem of thermal hydraulics and aerosol processes. The comparison between calculation and test showed a good agreement for all the tests with respect to the thermal-hydraulic conditions in the vessel, i.e. total pressure, atmosphere temperature, sump water and nitrogen mass, etc. As for aerosol depletion, the ASTEC results were in good overall agreement with the measured data. The code in particular predicted well the fast depletion of the hygroscopic and mixed aerosols and the slow depletion of insoluble silver aerosol. The important effects of bulk condensation, solubility and the Kelvin effect on aerosol depletion were well predicted. However, the code's overestimation of steam condensation on hygroscopic aerosols in supersaturated conditions indicates that some slight improvements of the appropriate ASTEC models are needed in the future. In the final ISP44 workshop, the deviations of the ASTEC results with respect to the experiments were considered to be small compared to those of most other codes. (authors)

  1. ASTEC and MELCOR comparison for a VVER-1000 60 mm small break LOCA

    Georgieva, J.; Stefanova, A.; Groudev, P.; Tusheva, P.; Mladenov, I.; Dimov, D.; Passalacqua, R.

    2005-01-01

In this paper a comparison between severe accident calculations performed for a VVER-1000 with the ASTEC 1.1v0 and MELCOR 1.8.5 computer codes for a small break LOCA (ID 60 mm) without intervention of hydro accumulators is presented. This investigation has been performed in the framework of the SARNET project under the EURATOM 6th framework program. Once the accident sequence scenario is specified, both codes (MELCOR and ASTEC) are able to determine the core and containment damaged states and to estimate the release of radionuclides from the fuel as well as from the primary circuit and containment. These results are used to estimate the maximum period of time during which the personnel could still take particular decisions in order to mitigate such an accident. The aim of the performed analysis is to estimate the discrepancy between the ASTEC and MELCOR 1.8.5 calculations. Such discrepancies will be studied and, where relevant, proposals for ASTEC improvements will be made. The ASTEC capability to simulate specific reactor accident scenarios and/or particular safety systems will also be tested. The final target is to propose severe accident management procedures for VVER-1000 reactors. In conclusion, the analysis for a small break LOCA (ID 60 mm, without hydro accumulators) has shown some discrepancies between ASTEC and MELCOR, especially during the degradation of the core. Further analyses are planned in which the MELCOR temperature 'set point' for core degradation (2520 K) will be progressively increased to approach the ASTEC one (which has been estimated to be about 3200 K). The comparison of the new results will allow a better evaluation of the in-vessel models implemented in ASTEC.

  2. Context quantization by minimum adaptive code length

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....
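
The minimum-adaptive-code-length criterion can be made concrete with the Krichevsky–Trofimov sequential estimator, a standard choice for adaptively coding binary symbols: each symbol costs −log2 of its adaptively estimated probability, and a context quantizer would be chosen to minimise the sum of such costs over all contexts. A minimal sketch of the per-context code length:

```python
import math

# Adaptive code length of a binary sequence under the Krichevsky-Trofimov
# estimator: p_hat(x | past) = (n_x + 1/2) / (n0 + n1 + 1), and the symbol
# cost is -log2(p_hat). The sum over a context's symbols is the quantity a
# minimum-adaptive-code-length quantizer design would minimise.

def kt_code_length(bits):
    n = [0, 0]                      # running counts of zeros and ones
    total = 0.0
    for b in bits:
        p = (n[b] + 0.5) / (n[0] + n[1] + 1.0)
        total += -math.log2(p)      # ideal adaptive code length, in bits
        n[b] += 1
    return total
```

Merging two contexts concatenates their symbol streams under one estimator; comparing the merged code length with the sum of the separate ones quantifies the dilution-vs-adaptivity trade-off the paper studies.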

  3. Development and validation of the ASTEC-Na thermal-hydraulic models

    Herranz, L. E.; Perez, S.; Bandini, G.; Jacq, F.; Parisi, C.; Berna, C.

    2014-07-01

In recent years, interest in sodium-cooled fast reactors (SFR) has been fostered worldwide by the search for higher nuclear energy sustainability. This has been reflected in various international initiatives such as the GEN-IV International Forum, INPRO or the ESNII platforms. At the same time, innovative nuclear reactor designs, particularly SFR, are aiming at even higher safety standards than current LWRs. A proof of this is the consideration of severe accidents since the earliest stages of reactor design. Commonalities of LWR and SFR severe accident scenarios suggest that some of the knowledge achieved in the LWR arena might be applicable to some extent to SFRs. This is the spirit underneath the EU-JASMIN project, whose generic goal is to develop the ASTEC-Na code from the LWR ASTEC platform. This will entail extending and adapting some existing models as well as implementing new ones in all the areas covered, from neutronics and pin thermo-mechanics to the in-containment source term behavior, going through the indispensable Na thermal-hydraulics. (Author)

  4. Adaptive format conversion for scalable video coding

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  5. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Cocco Giuseppe

    2010-01-01

Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of a slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information theoretical analysis of such an approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.
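
The per-sink adaptation can be contrasted with the conventional worst-case broadcast rate in a few lines, using the AWGN capacity log2(1 + SNR) as the rate target; the back-off margin is an assumed illustration, not a quantity from the paper.

```python
import math

# Per-sink rate adaptation sketch: each receiving node is assigned a rate
# matched to its own channel, independently of the other sinks, instead of
# serving every sink at the worst-case rate.

def capacity(snr_linear):
    return math.log2(1.0 + snr_linear)   # AWGN capacity, bits/symbol

def per_sink_rates(snrs, margin=0.9):
    """Rate per sink: a fraction of its own capacity (margin is assumed)."""
    return [margin * capacity(s) for s in snrs]

def worst_case_rate(snrs, margin=0.9):
    """Conventional broadcast: everyone served at the weakest sink's rate."""
    return margin * capacity(min(snrs))
```

With heterogeneous SNRs the aggregate per-sink throughput exceeds the worst-case broadcast throughput, which is the gain the opportunistic scheduling exploits.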

  6. Analyses with ASTEC related to release of FPs and aerosol transport in case of SBLOCA For WWER 1000

    Atanasova, B.; Stefanova, A.; Groudev, P.

    2008-01-01

The objective of this paper is to present the results obtained from calculations with the ASTEC computer code for the source term evaluation of a specific severe accident transient. The calculations have been performed with the new version of ASTEC. The ASTEC 1.3 R2 code version was released by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) at the end of 2007. The sequences include the release of fission products into the reactor containment and environment and the transport of fission products. The analyses proposed here are performed to simulate radioactive product release through the cold leg of the SG under accidental conditions. This investigation has been performed in the framework of the SARNET project (under the EURATOM 6th framework program) by the FoBAUs group (Forum of Bulgarian ASTEC users). (authors)

  7. Analysis and evaluation of the ASTEC model basis. Relevant experiments. Technical report

    Koppers, V.; Koch, M.K.

    2015-12-01

The present report is a Technical Report within the research project ''ASMO'', funded by the German Federal Ministry of Economics and Technology (BMWi 1501433) and conducted at the Reactor Simulation and Safety Group, Chair of Energy Systems and Energy Economics (LEE) at the Ruhr-Universitaet Bochum (RUB). The project deals with the analysis of the model basis of the Accident Source Term Evaluation Code (ASTEC). This report focuses on the containment part of ASTEC (CPA) and presents the simulation results of the experiment TH20.7. The experimental series TH20 was performed in the test vessel THAI (Thermal-hydraulics, Aerosols, Iodine) to investigate the erosion of a helium layer by a blower-generated air jet. Helium is used as a substitute for hydrogen. In the experiment TH20.7 a light-gas layer is established and eroded by a momentum-driven jet. The simulation of momentum-driven jets is challenging for CPA because there is no model to simulate the kinetic momentum transfer. The subject of this report is the analysis of the capability of the code, with the current model basis, to model momentum-driven phenomena. The jet is modelled using virtual ventilation systems, so-called FAN-Systems. The FAN-Systems are adapted to the erosion velocity. The simulation results are compared to the experimental results and to a basic calculation using FAN-Systems without any adjustments. For further improvement, different variation calculations are performed. At first, the vertical nodalization is refined. Subsequently, the resistance coefficients are adjusted to support the jet flow pattern, and the number of FAN-Systems is reduced. The analysis shows that the simulation of a momentum-driven light-gas layer erosion is possible using adjusted FAN-Systems. A finely selected vertical nodalization and adaptation of the resistance coefficients improve the simulation results.

  8. Adaptable recursive binary entropy coding technique

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  9. Comparisons for ESTA-Task3: ASTEC, CESAM and CLÉS

    Christensen-Dalsgaard, J.

    The ESTA activity under the CoRoT project aims at testing the tools for computing stellar models and oscillation frequencies that will be used in the analysis of asteroseismic data from CoRoT and other large-scale upcoming asteroseismic projects. Here I report results of comparisons between calculations using the Aarhus code (ASTEC) and two other codes, for models that include diffusion and settling. It is found that there are likely deficiencies, requiring further study, in the ASTEC computation of models including convective cores.

  10. Fission product release from nuclear fuel II. Validation of ASTEC/ELSA on analytical and large scale experiments

    Brillant, G.; Marchetto, C.; Plumecocq, W.

    2013-01-01

Highlights: • A wide range of experiments is presented for the ASTEC/ELSA code validation. • Analytical tests such as AECL, ORNL and VERCORS are considered. • A large-scale experiment, PHEBUS FPT1, is considered. • The good agreement with measurements shows the efficiency of the ASTEC modelling. • Improvements concern the FP release modelling from MOX and high burn-up UO2 fuels. - Abstract: This article is the second of two articles dedicated to the mechanisms of fission product release from a degraded core. The models of fission product release from nuclear fuel in the ASTEC code have been described in detail in the first part of this work (Brillant et al., this issue). In this contribution, the validation of ELSA, the module of ASTEC that deals with fission product and structural material release from a degraded core, is presented. A large range of experimental tests, with various temperatures and conditions for the fuel's surrounding atmosphere (oxidising and reducing), is thus simulated with the ASTEC code. The validation database includes several analytical experiments with both bare fuel (e.g. MCE1 experiments) and cladded fuel (e.g. HCE3, VERCORS). Furthermore, the PHEBUS large-scale experiments are used for the validation of ASTEC. The rather satisfactory comparison between ELSA calculations and experimental measurements demonstrates the efficiency of the analytical models to describe fission product release in severe accident conditions.

  11. Comparison of ASTEC 1.3 and ASTEC 1.3 R2 calculations in case of SBO for VVER-1000 reactor

    Atanasova, B.; Stefanova, A.; Grudev, P.

    2009-01-01

The report presents the results from severe accident analyses performed with both the ASTEC v1.3 and ASTEC v1.3R2 versions of the computer code for a VVER-1000 type reactor. The purpose of this analysis is to assess the progress of ASTEC code modeling of the main phenomena arising during hypothetical severe accidents. The final target of these analyses is to estimate the behaviour of the ASTEC code and its capability for the simulation of severe accidents, including safety systems and Severe Accident Management (SAM) procedures. The analyses have been performed assuming a station blackout with simultaneous loss of the HPIS, LPIS (ECCSs), EFWS and spray system due to failure of the DGs. Hydro accumulators are not available. In the calculation, the PRZ relief valves are assumed to open and stick open. The fission product path through the SEMPELL valve has been modelled. It should be noted that this investigation was limited to the 'in-vessel' phase of the sequence; therefore the effect of sprays on the containment atmosphere has not been studied. (authors)

  12. Validation of ASTEC V2 models for the behaviour of corium in the vessel lower head

    Carénini, L.; Fleurot, J.; Fichot, F.

    2014-01-01

    The paper is devoted to the presentation of validation cases carried out for the models describing the corium behaviour in the “lower plenum” of the reactor vessel implemented in the V2.0 version of the ASTEC integral code, jointly developed by IRSN (France) and GRS (Germany). In the ASTEC architecture, these models are grouped within the single ICARE module and they are all activated in typical accident scenarios. Therefore, it is important to check the validity of each individual model, as long as experiments are available for which a single physical process is involved. The results of ASTEC applications against the following experiments are presented: FARO (corium jet fragmentation), LIVE (heat transfer between a molten pool and the vessel), MASCA (separation and stratification of non-miscible corium phases) and OLHF (mechanical failure of the vessel). Compared to the previous ASTEC V1.3 version, the validation matrix has been extended. This work allowed recommended values to be determined for some model parameters (e.g. debris particle size in the fragmentation model and the criterion for debris bed liquefaction). Almost all the processes governing the corium behaviour, its thermal interaction with the vessel wall and the vessel failure are modelled in ASTEC, and these models have been assessed individually with satisfactory results. The main uncertainties appear to be related to the calculation of transient evolutions.

  13. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one-dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption.
Non-uniqueness of the adapted grid is
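
    The one-dimensional redistribution step described in the abstract above can be sketched in a few lines. In this illustrative Python sketch (not SAGE's actual code or interface), spring stiffness grows with the local flow gradient and the equilibrium of the spring chain is found with a tridiagonal (Thomas) solve; the stiffness law, the cap and all names are assumptions made here for brevity.

```python
# Minimal 1D sketch of tension-spring grid adaptation in the spirit of
# the method described above. Spring stiffness is proportional to the
# local flow gradient; the equilibrium of the chain is a tridiagonal
# solve. Illustrative assumptions throughout, not SAGE's interface.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adapt_grid(x, f, alpha=5.0, k_max=100.0, iters=10):
    """Pull interior points of x toward high-gradient regions of f."""
    x = list(x)
    for _ in range(iters):
        # Tension ~ 1 + alpha*|gradient| per interval, capped so the
        # spacing stays bounded away from zero (the real method uses
        # torsion/smoothness constraints instead of a hard cap).
        k = [min(1.0 + alpha * abs((f[i + 1] - f[i]) / (x[i + 1] - x[i])),
                 k_max) for i in range(len(x) - 1)]
        n = len(x) - 2                                # interior unknowns
        a = [0.0] + [k[j] for j in range(1, n)]       # sub-diagonal
        b = [-(k[j] + k[j + 1]) for j in range(n)]    # main diagonal
        c = [k[j + 1] for j in range(n - 1)] + [0.0]  # super-diagonal
        d = [0.0] * n
        d[0] = -k[0] * x[0]        # fixed left boundary point
        d[-1] -= k[n] * x[-1]      # fixed right boundary point
        x[1:-1] = thomas(a, b, c, d)
    return x
```

    On a step-like flow field, the interior points migrate toward the discontinuity while the boundary points stay fixed. For brevity, f is kept attached to node indices rather than re-interpolated onto the moving grid.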

  14. Main modelling features of the ASTEC V2.1 major version

    Chatelard, P.; Belon, S.; Bosland, L.; Carénini, L.; Coindreau, O.; Cousin, F.; Marchetto, C.; Nowack, H.; Piar, L.; Chailan, L.

    2016-01-01

    Highlights: • Recent modelling improvements of the ASTEC European severe accident code are outlined. • Key new physical models now available in the ASTEC V2.1 major version are described. • ASTEC progress towards a multi-design reactor code is illustrated for BWR and PHWR. • ASTEC's strong link with the on-going EC CESAM FP7 project is emphasized. • Main remaining modelling issues (on which IRSN efforts are now directed) are given. - Abstract: A new major version of the European severe accident integral code ASTEC, developed by IRSN with some GRS support, was delivered in November 2015 to the ASTEC worldwide community. The main modelling features of this V2.1 version are summarised in this paper. In particular, the in-vessel coupling technique between the reactor coolant system thermal-hydraulics module and the core degradation module has been strongly re-engineered to remove some well-known weaknesses of the former V2.0 series. The V2.1 version also includes new core degradation models specifically addressing BWR and PHWR reactor types, as well as several other physical modelling improvements, notably on reflooding of severely damaged cores, Zircaloy oxidation under air atmosphere, corium coolability during corium-concrete interaction and source term evaluation. Moreover, this V2.1 version constitutes the backbone of the CESAM FP7 project, whose final objective is to further improve ASTEC for use in Severe Accident Management analysis of the Gen.II–III nuclear power plants presently under operation or foreseen in the near future in Europe. As part of this European project, IRSN efforts to continuously improve both code numerical robustness and computing performances at plant scale as well as users’ tools are being intensified. Besides, ASTEC will continue capitalising the whole knowledge on severe accidents phenomenology by progressively keeping physical models at the state of the art through a regular feed-back from the interpretation of the current and

  15. Assessment on 900–1300 MWe PWRs of the ASTEC-based simulation tool of SGTR thermal-hydraulics for the IRSN Emergency Technical Centre

    Foucher, L., E-mail: laurent.foucher@irsn.fr [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SAG, Cadarache, Saint-Paul-lez-Durance 13115 (France); Cousin, F.; Fleurot, J. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SAG, Cadarache, Saint-Paul-lez-Durance 13115 (France); Brethes, S. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PRP-CRI/SESUC, Cadarache, Saint-Paul-lez-Durance 13115 (France)

    2014-06-01

    In the event of an accident occurring in a nuclear power plant (NPP), being able to predict the amount of released radioactive substances in the environment is of prime importance. Depending on the severity of the accident, it can be necessary to quickly and efficiently protect the population and the surrounding environment from the associated radiological consequences. In France, the IRSN Emergency Technical Centre provides technical support for decision making in case of a nuclear accident. The main objectives are to evaluate and predict the plant behaviour and radioactive releases during the accident. Different types of complementary tools are used: expert assessments, pre-calculated databases, simulation tools, etc. In the case of Steam Generator Tube Rupture (SGTR) accidents that may lead to significant radioactive releases to the atmosphere through the steam generator relief valves, IRSN is currently improving the simulation tools for diagnosis in crisis management. The objective is to adapt the thermal-hydraulic and FP behaviour modules of the severe accident integral code ASTEC V2.0, jointly developed by IRSN and its German counterpart GRS, to crisis management requirements. These requirements impose a fast running, highly reliable (accurate physical results), flexible and simple tool. This paper summarizes the results of the benchmarks between the ASTEC V2.0 thermal-hydraulic module and the CATHARE 2 (V2.5) French reference thermal-hydraulics code on several SGTR scenarios both for PWR 900 and 1300 MWe, with a particular emphasis on the computational time and physical models assessment. The overall agreement between both codes is good on the primary and secondary circuit thermal-hydraulic parameters. Moreover, the reliability and fast computational time of the thermal-hydraulic module of the ASTEC V2.0 code appeared very satisfactory and in accordance with the requirements of an emergency tool.

  17. ASTEC-CATHARE2 benchmarks on French PWR 1300MWe reactors

    Tregoures, Nicolas; Philippot, Marc; Foucher, Laurent; Guillard, Gaetan; Fleurot, Joelle

    2009-01-01

    The French Institut de Radioprotection et de Surete Nucleaire (IRSN) is performing a level 2 Probabilistic Safety Assessment (PSA-2) on the French 1300 MWe reactors. This PSA-2 relies heavily on the ASTEC integral computer code, jointly developed by IRSN and GRS (Germany). In order to assess the reliability and the quality of the physical results of the ASTEC V1.3 code as well as the PWR 1300 MWe reference input deck, an extensive series of benchmarks with the French best-estimate thermal-hydraulic code CATHARE 2 V2.5 has been performed on 14 different severe accident scenarios. The present paper details 2 of the 14 studied scenarios: a 12-inch cold leg Loss of Coolant Accident (LOCA) and a 2-tube Steam Generator Tube Rupture (SGTR). The thermal-hydraulic behavior of the primary and secondary circuits is thoroughly investigated and the ASTEC results of the core degradation phase are presented. Overall, the thermal-hydraulic behavior given by ASTEC V1.3 is in very good agreement with the CATHARE 2 V2.5 results. (author)

  18. ICAN Computer Code Adapted for Building Materials

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  19. ASTEC: Controls analysis for personal computers

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  20. Adaptive Space–Time Coding Using ARQ

    Makki, Behrooz

    2015-09-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed-form solutions for the energy-limited optimal power allocation and investigate the diversity gain of different STC-ARQ schemes. In addition, sufficient conditions are derived for the usefulness of ARQ in terms of energy-limited outage probability. The results show that, for a large range of feedback costs, the energy efficiency is substantially improved by the combination of ARQ and STC techniques if optimal power allocation is utilized. © 2014 IEEE.

  1. Intrinsic gain modulation and adaptive neural coding.

    Sungho Hong

    2008-07-01

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change of the gain with respect to both mean and variance to the receptive fields derived from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.

  2. Comparative analysis of a LOCA for a German PWR with ASTEC and ATHLET-CD

    Reinke, N.; Chan, H.W.; Sonnenkalb, M.

    2013-01-01

    This paper presents the results of a comparative analysis performed with ASTEC V2.02 and a coupled ATHLET-CD V2.2c/COCOSYS V2.4 calculation for a German 1300 MWe KONVOI type PWR. The purpose of this analysis is mainly to assess the ASTEC code behaviour in modelling both the thermal-hydraulic phenomena in the coolant circuit arising during a hypothetical severe accident and the early phase of the core degradation, versus the more mechanistic code system ATHLET-CD/COCOSYS. The performed analyses cover a loss of coolant accident (LOCA) sequence. Such a comparison has been done for the first time. The integral code ASTEC (Accident Source Term Evaluation Code), commonly developed since 1996 by IRSN and GRS, is a fast-running programme which allows the calculation of entire sequences of severe accidents (SA) in light water reactors from the initiating event up to the release of fission products into the environment, thereby covering all important in-vessel and containment phenomena. The thermal-hydraulic mechanistic system code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by GRS for the analysis of the whole spectrum of leaks and transients in PWRs and BWRs. For modelling core degradation processes, the CD (Core Degradation) part of ATHLET can be activated. For analyses of the containment behaviour, ATHLET-CD has been coupled to the GRS code COCOSYS (COntainment COde SYStem). (orig.)

  3. Specific validation of COCOSYS and ASTEC and generic application. Final report; Gezielte Validierung von COCOSYS und ASTEC sowie generische Anwendungsrechnungen mit diesen Rechenprogrammen. Abschlussbericht

    Klein-Hessling, W.; Arndt, S.; Erdmann, W.; and others

    2010-07-15

    In connection with the provision of tools for the assessment of incident and accident sequences and of accident management measures in nuclear power plants, the Federal Ministry of Economics and Technology (BMWi) sponsored in this project a further validation of the COCOSYS (Containment Code System) code system and the Franco-German ASTEC (Accident Source Term Evaluation Code) integral code. COCOSYS is being developed and validated for the comprehensive simulation of severe accidents in a light-water reactor (LWR) containment as well as for the analytical monitoring of experiments. The general objective is the simulation of all relevant processes and conditions in the containment during the course of a severe accident (including design basis accidents). This includes the consideration of all relevant interactions between the various phenomena. ASTEC is being jointly developed by IRSN and GRS with the aim of providing a fast-running code for the calculation of the entire sequence of a severe accident in a light-water reactor, starting from the initiating event and including the release of fission products into the environment. The code's fields of application are level-2 probabilistic safety analyses, the analysis of incident and accident sequences, uncertainty and sensitivity analyses, and the analytical evaluation of experiments. The work performed within the COCOSYS project involves the validation of the new iodine module AIM-3 as well as the corresponding monitoring of iodine experiments inside the THAI facility. The main focus of these experiments was the interaction of iodine with steel and paint, radiolytic interactions and the iodine-ozone reaction. The extensions of fire simulations with COCOSYS regarding plume simulation and soot transport have been examined successfully on the basis of further experiments within the OECD PRISME project. A further main focus is the combined use and comparison of calculated results of COCOSYS, a lumped parameter code, and CFX.

  5. A fast running method for predicting the efficiency of core melt spreading for application in ASTEC

    Spengler, C.

    2010-01-01

    The integral Accident Source Term Evaluation Code (ASTEC) is jointly developed by the French Institut de Radioprotection et de Surete Nucleaire (IRSN) and the German Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH to simulate the complete scenario of a hypothetical severe accident in a nuclear light water reactor, from the initial event until the possible radiological release of fission products out of the containment. In the frame of the new series of ASTEC V2 versions, appropriate model extensions to the European Pressurised Water Reactor (EPR) are under development. In order to assess with ASTEC the proper operation of the EPR ex-vessel melt retention and coolability concept with regard to melt spreading, approximations of the area finally covered by the corium and of the distance travelled by the corium front before freezing are required. A first necessary capability of ASTEC is to identify such boundary cases, for which there is a potential that the melt will freeze before the spreading area is completely filled. This paper presents a fast-running method for estimating the final extent of the area covered with melt, on which a simplified criterion in ASTEC for detecting such boundary cases will be based. If a boundary case is detected, the application of a more detailed method might be necessary to assess further the consequences for the accident sequence. The major objective here is to provide a reliable method for estimating the final result of the spreading, not to provide highly detailed methods for simulating the dynamics of the transient process. (orig.)

  6. ASTEC V1.2.1 analysis of fission product transport in the primary system of a VVER-1000 type NPP during a severe accident

    Dienstbier, J.

    2006-06-01

    The SOPHAEROS module of the ASTEC V1.2.1 code was used. The results are compared to those obtained with the MELCOR 1.8.5 code. One case was also calculated in which the input data for the SOPHAEROS module were taken from the MELCOR results instead of being provided by the other ASTEC modules. Marked differences were observed between the results of the two codes, which can only be partially explained by the different assumptions made in them. The deposition profiles along the primary piping, however, are similar in the two codes.

  7. An efficient adaptive arithmetic coding image compression technology

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. Using an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image from the code book information. The adoption of an adaptive arithmetic coding algorithm for image compression greatly improves the image compression rate. The results show that it is an effective compression technology.
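
    The core of the scheme sketched in this abstract is the adaptive probability model: symbol counts are updated as the stream is processed, so encoder and decoder converge to the source statistics in a single pass and no probability table has to be transmitted. The Python sketch below (illustrative names, not the paper's algorithm) computes the ideal code length such a model would give an arithmetic coder, rather than emitting an actual bitstream.

```python
# Sketch of the adaptive probability model at the heart of adaptive
# arithmetic coding. Counts start at 1 (Laplace smoothing) so every
# symbol has nonzero probability before it is first seen. Instead of a
# full coder, this computes the ideal code length (-log2 p per symbol).
import math

class AdaptiveModel:
    def __init__(self, alphabet):
        self.counts = {s: 1 for s in alphabet}
        self.total = len(alphabet)

    def prob(self, symbol):
        return self.counts[symbol] / self.total

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1

def adaptive_code_length(data, alphabet):
    """Bits an arithmetic coder driven by this model would need (ideal)."""
    model = AdaptiveModel(alphabet)
    bits = 0.0
    for s in data:
        bits += -math.log2(model.prob(s))
        model.update(s)   # the decoder makes the same update, so no
    return bits           # side table needs to be transmitted
```

    On a skewed stream the per-symbol cost approaches the source entropy, well below the 1 bit/symbol of a static uniform code over a two-symbol alphabet.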

  8. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.
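
    The distributed-source-coding idea used here can be illustrated with a textbook syndrome scheme, with a tiny Hamming(7,4) code standing in for the paper's rate-adaptive BCH codes: the encoder transmits only the 3-bit syndrome of X, and the decoder recovers X from the side information Y whenever the two differ in at most one position. This is a generic sketch under those assumptions, not the paper's construction.

```python
# Syndrome-based Slepian-Wolf sketch with a Hamming(7,4) code.
# Columns of H are the numbers 1..7 in binary (classic Hamming layout),
# so the syndrome of a single-bit error directly encodes its position.
H = [[(col >> r) & 1 for col in range(1, 8)] for r in range(3)]

def syndrome(bits):
    """3-bit syndrome H·bits mod 2 of a 7-bit word."""
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def decode(y, s):
    """Recover x from side information y and the transmitted syndrome s."""
    sy = syndrome(y)
    # The XOR of the two syndromes is the position (1..7) where x and y
    # differ; 0 means the side information already equals x.
    pos = sum((s[r] ^ sy[r]) << r for r in range(3))
    x = list(y)
    if pos:
        x[pos - 1] ^= 1
    return x
```

    Only 3 bits are sent instead of 7, which is the compression gain that makes the highly skewed X-given-Y statistics of the paper attractive.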

  9. A multiobjective approach to the genetic code adaptability problem.

    de Oliveira, Lariza Laura; de Oliveira, Paulo S L; Tinós, Renato

    2015-02-19

    The organization of the canonical code has intrigued researchers since it was first described. If we consider all codes mapping the 64 codons into 20 amino acids and one stop codon, there are more than 1.51×10^84 possible genetic codes. The main question related to the organization of the genetic code is why exactly the canonical code was selected among this huge number of possible genetic codes. Many researchers argue that the organization of the canonical code is a product of natural selection and that the code's robustness against mutations would support this hypothesis. In order to investigate the natural selection hypothesis, some researchers employ optimization algorithms to identify regions of the genetic code space where the best codes, according to a given evaluation function, can be found (engineering approach). The optimization process uses only one objective to evaluate the codes, generally based on the robustness for an amino acid property. Only one objective is also employed in the statistical approach for the comparison of the canonical code with random codes. We propose a multiobjective approach where two or more objectives are considered simultaneously to evaluate the genetic codes. In order to test our hypothesis that the multiobjective approach is useful for the analysis of the genetic code adaptability, we implemented a multiobjective optimization algorithm where two objectives are simultaneously optimized. Using as objectives the robustness against mutation with the amino acid properties polar requirement (objective 1) and robustness with respect to hydropathy index or molecular volume (objective 2), we found solutions closer to the canonical genetic code in terms of robustness, when compared with the results using only one objective reported by other authors. Using more objectives, more optimal solutions are obtained and, as a consequence, more information can be used to investigate the adaptability of the genetic code.
The multiobjective approach
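
    The multiobjective evaluation idea can be sketched as follows: each candidate code receives one robustness score per amino-acid property, and candidates are then compared by Pareto dominance. The single-letter "codon" alphabet and the property values used in the test are toy assumptions made here for brevity, not the paper's data.

```python
# Toy sketch of multiobjective genetic-code evaluation: robustness is
# the mean squared change of an amino-acid property over all
# single-nucleotide substitutions, computed once per property; codes
# are then compared by Pareto dominance (lower scores are better).
BASES = "ACGU"

def robustness(code, prop):
    """Mean squared property change over all single-point codon mutations."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        for pos in range(len(codon)):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                aa2 = code.get(mutant)
                if aa2 is None:        # mutant codon outside the toy code
                    continue
                total += (prop[aa] - prop[aa2]) ** 2
                n += 1
    return total / n

def dominates(obj_a, obj_b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(obj_a, obj_b)) and obj_a != obj_b
```

    With two property tables (e.g. polar requirement and molecular volume), each code maps to a two-component objective vector, and the non-dominated set plays the role of the paper's optimal solutions.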

  10. Analysis of the THAI Iod-11 and Iod-12 tests: Advancements and limitations of ASTEC V2.0R3p1 and MELCOR V2.1.4803

    Gonfiotti, Bruno; Paci, Sandro

    2015-01-01

    Highlights: • The I2 transport in a multi-compartment vessel was analysed. • The ASTEC and MELCOR codes were employed. • The same nodalisation was used for the code-to-code comparison. • The I2 concentrations were quite well simulated by ASTEC. • Numerical issues arose with MELCOR. - Abstract: This work concerns the application of the ASTEC V2.0R3p1 and MELCOR V2.1.4803 codes to the analysis of the THAI Iod-11 and Iod-12 containment tests, characterised by an iodine release. The main scope of these two tests was to investigate the iodine interaction with steel on dry and wet surfaces, with the interaction supposed to be a two-step process: an initial, faster and reversible physisorption followed by a slower, irreversible chemisorption of the physisorbed I2. The aim of the present work is to highlight advancements and limitations of the current ASTEC and MELCOR code versions with respect to the older code versions employed during the European SARNET projects. The investigation was carried out as a code-to-code comparison against the experimental THAI data, focusing on the evaluation of the code models treating iodine behaviour. A similar spatial nodalisation was employed for both codes. As the main result, ASTEC showed an overall good agreement with the iodine-related experimental data while, on the contrary, MELCOR showed poor results, probably due to unsolved numerical issues and unsatisfactory iodine modelling.

  11. Validation of ASTEC v2.0 corium jet fragmentation model using FARO experiments

    Hermsmeyer, S.; Pla, P.; Sangiorgi, M.

    2015-01-01

    Highlights: • Model validation base extended to six FARO experiments. • Focus on the calculation of the fragmented particle diameter. • Capability and limits of the ASTEC fragmentation model. • Sensitivity analysis of model outputs. - Abstract: ASTEC is an integral code for the prediction of Severe Accidents in Nuclear Power Plants. As such, it needs to cover all physical processes that could occur during accident progression, yet keep its models simple enough for the ensemble to stay manageable and produce results within an acceptable time. The present paper is concerned with the validation of the corium jet fragmentation model of ASTEC v2.0 rev3 by means of a selection of six experiments carried out in the FARO facility. The different conditions applied in these six experiments help to analyse the model behaviour in different situations and to expose model limits. In addition to comparing model outputs with experimental measurements, sensitivity analyses are applied to investigate the model. The results of the paper are (i) validation runs, accompanied by an identification of situations where the implemented fragmentation model does not match the experiments well, and a discussion of the results; (ii) special attention to the models calculating the diameter of the fragmented particles, the identification of a fault in one implemented model, and a discussion of simplifications and ad hoc modifications to improve the model fit; and (iii) an investigation of the sensitivity of predictions towards inputs and parameters. In this way, the paper offers a thorough investigation of the merits and limitations of the fragmentation model used in ASTEC.

  12. Adaptive Modulation and Coding for LTE Wireless Communication

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeting to become the first global mobile phone standard, despite the barrier of the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the mobile wireless channel conditions, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose modulation types and the forward error correction (FEC) coding rate.
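    The link-adaptation loop the abstract describes can be sketched as a lookup in a threshold table: the receiver reports channel quality, and the transmitter picks the highest-throughput modulation/code-rate pair the channel still supports. The thresholds and table entries below are illustrative assumptions, not the actual LTE CQI/MCS tables.

    ```python
    # Hypothetical AMC table: (min SNR in dB, modulation, bits/symbol, code rate).
    AMC_TABLE = [
        (1.0,  "QPSK",  2, 1 / 3),
        (6.0,  "QPSK",  2, 2 / 3),
        (10.0, "16QAM", 4, 1 / 2),
        (14.0, "16QAM", 4, 3 / 4),
        (18.0, "64QAM", 6, 3 / 4),
    ]

    def select_mcs(snr_db):
        """Return the most efficient (modulation, bits/symbol * rate) the channel supports."""
        best = None
        for min_snr, mod, bits, rate in AMC_TABLE:
            if snr_db >= min_snr:       # entries are ordered by threshold
                best = (mod, bits * rate)
        return best  # None means the channel is too poor even for the lowest scheme

    print(select_mcs(12.0))
    ```

    As the reported SNR rises, the selected scheme climbs the table, which is the mechanism behind the spectral-efficiency gain described above.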

  13. Satellite Media Broadcasting with Adaptive Coding and Modulation

    Georgios Gardikis

    2009-01-01

    Adaptive Coding and Modulation (ACM) is a feature incorporated into the DVB-S2 satellite specification, allowing real-time adaptation of transmission parameters according to link conditions. Although ACM was originally designed for optimizing unicast services, this article discusses extending its use to broadcast streams as well. For this purpose, a general cross-layer adaptation approach is proposed, along with its realization in a fully functional experimental network, and test results are presented. Finally, two case studies are analysed, assessing the gain derived from ACM in a real large-scale deployment involving HD service provision to two different geographical areas.

  14. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
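    The threshold-driven selection can be sketched as follows: each block is assigned the cheapest coder from a bank whose expected distortion meets a target. The coder bank, its rate/distortion figures, and the variance-based distortion proxy are hypothetical stand-ins for the paper's DCT coders.

    ```python
    def block_variance(block):
        """Activity measure used here as a crude distortion proxy."""
        n = len(block)
        mean = sum(block) / n
        return sum((x - mean) ** 2 for x in block) / n

    # Hypothetical coder bank: (rate in bits/pel, fraction of variance left as distortion),
    # ordered from cheapest to costliest.
    CODERS = [(0.25, 0.50), (0.5, 0.25), (1.0, 0.10), (2.0, 0.02)]

    def choose_coder(block, target_distortion):
        """Pick the lowest-rate coder whose expected distortion meets the target."""
        var = block_variance(block)
        for rate, frac in CODERS:
            if var * frac <= target_distortion:
                return rate
        return CODERS[-1][0]          # fall back to the highest rate

    flat = [10, 10, 10, 10]           # uniform block: the cheapest coder suffices
    busy = [0, 255, 0, 255]           # high-activity block needs more bits
    print(choose_coder(flat, 5.0), choose_coder(busy, 5.0))
    ```

    Because the rate varies per block while distortion is held roughly constant, the average rate adapts to image content, which is the variable-rate behaviour described above.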

  15. Adaptive discrete cosine transform coding algorithm for digital mammography

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block-adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous distribution of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression ratios (11 to 26) is possible in the mammography field.

  16. Implementation and testing of the CFDS-FLOW3D code

    Smith, B.L.

    1994-03-01

    FLOW3D is a multi-purpose, transient fluid dynamics and heat transfer code developed by Computational Fluid Dynamics Services (CFDS), a branch of AEA Technology, based at Harwell. The code is supplied with a SUN-based operating environment consisting of an interactive grid generator SOPHIA and a post-processor JASPER for graphical display of results. Both SOPHIA and JASPER are extensions of the support software originally written for the ASTEC code, also promoted by CFDS. The latest release of FLOW3D contains well-tested turbulence and combustion models and, in a less-developed form, a multi-phase modelling capability. This document describes briefly the modelling capabilities of FLOW3D (Release 3.2) and outlines implementation procedures for the VAX, CRAY and CONVEX computer systems. Additional remarks are made concerning the in-house support programs which have been specially written in order to adapt existing ASTEC input data for use with FLOW3D; these programs operate within a VAX-VMS environment. Three sample calculations have been performed and the results compared with those obtained previously using the ASTEC code, and checked against other available data, where appropriate. (author) 35 figs., 3 tabs., 42 refs

  17. Adaption of the PARCS Code for Core Design Audit Analyses

    Kim, Hyong Chol; Lee, Young Jin; Uhm, Jae Beop; Kim, Hyunjik [Nuclear Safety Evaluation, Daejeon (Korea, Republic of); Jeong, Hun Young; Ahn, Seunghoon; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

    The eigenvalue calculation also includes quasi-static core depletion analyses. PARCS implements a variety of features and has been qualified as a regulatory audit code in conjunction with other NRC thermal-hydraulic codes such as TRACE or RELAP5. In this study, as an adaptation effort for audit applications, PARCS is applied to an audit analysis of a reload core design. The lattice physics code HELIOS is used for cross section generation, and the PARCS-HELIOS code system has been established as a core analysis tool. Calculation results have been compared over a wide spectrum of quantities, such as power distribution, critical soluble boron concentration, and rod worth. Reasonable agreement between the audit calculation and the reference results has been found.

  18. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications because of its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose code rate and signal constellation parameters can vary according to the fading level, and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by inserting the adaptive wavelet coding into a control loop with nodes interconnected by a wireless link. These results enable the use of this technique in a wireless-link control loop.

  20. Adaptation of radiation shielding code to space environment

    Okuno, Koichi; Hara, Akihisa

    1992-01-01

    Recently, interest in the development of space has heightened. Space development involves many problems, one of which is protection from cosmic rays. Cosmic rays are ultrahigh-energy radiation, and until now there has been no radiation shielding design code that copes with them. Therefore, a high energy radiation shielding design code for accelerators was improved to cope with the peculiarities of cosmic rays. Moreover, the calculation of the radiation dose equivalent rate in a moon base incorporating countermeasures against cosmic rays was simulated using the improved code. As an important countermeasure for radiation protection, covering with regolith is employed, and the effect of regolith was confirmed using the improved code. Galactic cosmic rays, solar flare particles, the radiation belt, the adaptation of the radiation shielding code HERMES to the space environment, the improvement of the three-dimensional hadron cascade code HETCKFA-2 and the electromagnetic cascade code EGS 4-KFA, and the cosmic ray simulation are reported. (K.I.)

  1. The FORTRAN NALAP code adapted to a microcomputer compiler

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso

    2010-01-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA (TEcnologia de Reatores Rapidos Avancados, Technology for Advanced Fast Reactors) project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, eliminate old statements, introduce new ones and include extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and protected overpower transients. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 1980s. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the use of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop (CBCL) to be used as a heat/electric converter. (author)

  3. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in order to improve coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
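    The budget-driven mode selection idea can be sketched as follows: each coding unit is given the richest prediction-mode set whose cost fits the remaining per-unit time budget, so total encoding time tracks the target. The mode sets, their costs, and the budgeting rule are illustrative assumptions, not the paper's offline-statistics scheme.

    ```python
    # Hypothetical mode combinations, ordered by cost (arbitrary time units).
    MODE_SETS = {
        "fast":   (["MERGE"], 1.0),
        "medium": (["MERGE", "2Nx2N"], 2.5),
        "full":   (["MERGE", "2Nx2N", "NxN", "AMP"], 6.0),
    }

    def pick_mode_set(budget_left, units_left):
        """Choose the richest mode set whose cost fits the per-unit budget."""
        per_unit = budget_left / units_left
        chosen = "fast"
        for name, (_, cost) in MODE_SETS.items():   # ordered cheapest to costliest
            if cost <= per_unit:
                chosen = name
        return chosen

    # Encode 10 units under a total budget of 30 units (~50% of full complexity).
    budget, log = 30.0, []
    for units_left in range(10, 0, -1):
        name = pick_mode_set(budget, units_left)
        log.append(name)
        budget -= MODE_SETS[name][1]
    print(log)
    ```

    Note how unspent budget accumulates and the final unit is upgraded to the full mode set, mirroring the "use leftover budget to improve efficiency" step in the abstract.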

  4. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx uses a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
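    The particle-mesh coupling can be illustrated with a one-dimensional Cloud-in-Cell deposition, in which each particle's mass is split linearly between its two nearest cell centres. This is a generic CIC sketch on a uniform grid, not Nyx's 3-D AMR implementation.

    ```python
    def cic_deposit(positions, masses, n_cells, dx=1.0):
        """Deposit particle masses onto a 1-D grid of cell-centred densities."""
        density = [0.0] * n_cells
        for x, m in zip(positions, masses):
            s = x / dx - 0.5                 # position in cell-centre coordinates
            i = int(s) if s >= 0 else -1     # index of the left-hand cell centre
            frac = s - i                     # linear weight for the right-hand cell
            if 0 <= i < n_cells:
                density[i] += m * (1.0 - frac) / dx
            if 0 <= i + 1 < n_cells:
                density[i + 1] += m * frac / dx
        return density

    # A particle at x = 1.25 sits between cell centres 0.5 and 1.5:
    # cells 0 and 1 receive 0.25 and 0.75 of its mass, respectively.
    rho = cic_deposit(positions=[1.25], masses=[1.0], n_cells=4)
    print(rho)
    ```

    The same linear weights are used in reverse to interpolate the mesh-based gravitational force back onto the particles, which keeps deposition and interpolation momentum-consistent.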

  5. Least-Square Prediction for Backward Adaptive Video Coding

    Li Xin

    2006-01-01

    Almost all existing approaches to video coding exploit temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than a full-search, quarter-pel block matching algorithm (BMA) without the need to transmit any overhead.
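    The backward-adaptive idea — fitting predictor coefficients from the causal past only, so the decoder can recompute them and no overhead is transmitted — can be illustrated on a 1-D signal. The order-2 closed-form normal-equation solve below is a toy stand-in for the paper's spatio-temporal LSP.

    ```python
    def lsp_coeffs(past):
        """Fit (a1, a2) minimising sum over the past of (x[n] - a1*x[n-1] - a2*x[n-2])^2."""
        n = len(past)
        # Normal equations A c = b for an order-2 predictor, solved by Cramer's rule.
        a11 = sum(past[i - 1] ** 2 for i in range(2, n))
        a12 = sum(past[i - 1] * past[i - 2] for i in range(2, n))
        a22 = sum(past[i - 2] ** 2 for i in range(2, n))
        b1 = sum(past[i] * past[i - 1] for i in range(2, n))
        b2 = sum(past[i] * past[i - 2] for i in range(2, n))
        det = a11 * a22 - a12 * a12
        return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

    past = [1, 2, 3, 4, 5, 6]      # linear ramp: exactly x[n] = 2*x[n-1] - x[n-2]
    c1, c2 = lsp_coeffs(past)
    prediction = c1 * past[-1] + c2 * past[-2]
    print(round(prediction, 6))    # least squares recovers the ramp extrapolation
    ```

    Since both encoder and decoder run the same fit on the same causal samples, they agree on `(c1, c2)` without any side information, which is the key property exploited by LSP.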

  6. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, and a corresponding decoding scheme, are proposed. The RA-MLC scheme combines multilevel coded modulation technology with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  7. CoRoT/ESTA TASK 1 and TASK 3 comparison of the internal structure and seismic properties of representative stellar models. Comparisons between the ASTEC, CESAM, CLES, GARSTEC and STAROX codes

    Lebreton, Yveline; Montalbán, Josefina; Christensen-Dalsgaard, Jørgen; Roxburgh, Ian W.; Weiss, Achim

    2008-08-01

    We compare stellar models produced by different stellar evolution codes for the CoRoT/ESTA project, comparing their global quantities, their physical structure, and their oscillation properties. We discuss the differences between models and identify the underlying reasons for these differences. The stellar models are representative of potential CoRoT targets. Overall we find very good agreement between the five different codes, but with some significant deviations. We find noticeable discrepancies (though still at the per cent level) that result from the handling of the equation of state, of the opacities and of the convective boundaries. The results of our work will be helpful in interpreting future asteroseismology results from CoRoT.

  8. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance video presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards, which were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses of how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
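    The three-way block classification that drives the choice between BRP, BDP and conventional prediction can be sketched as below. The mean-absolute-difference measure and the thresholds are illustrative assumptions, not the paper's exact classification rules.

    ```python
    def classify_block(block, background, t_bg=2.0, t_fg=30.0):
        """Classify a block by its mean absolute difference from the modeled background."""
        diffs = [abs(a - b) for a, b in zip(block, background)]
        mean_diff = sum(diffs) / len(diffs)
        if mean_diff < t_bg:
            return "background"    # predict from the background reference (BRP)
        if mean_diff > t_fg:
            return "foreground"    # conventional inter/intra prediction
        return "hybrid"            # predict the background-subtracted residual (BDP)

    bg = [100, 100, 100, 100]
    print(classify_block([101, 100, 99, 100], bg),   # noise-level deviation
          classify_block([100, 100, 180, 100], bg),  # partly covered by an object
          classify_block([20, 240, 10, 250], bg))    # fully covered by an object
    ```

    In a real encoder the background frame itself would be maintained by a running model over the input frames, so the classification adapts as the scene slowly changes.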

  9. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  10. An Adaptive Motion Estimation Scheme for Video Coding

    Pengyu Liu

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the computational redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of the motion vector (MV) distribution. Then, an MV distribution prediction method is designed, covering prediction of both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of ME time while the rate-distortion performance is not compromised.
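    The idea of restricting the search to a sub-region sized by the predicted motion vector can be sketched with a plain SAD search. The window-sizing rule and the test data below are illustrative, not the UMHexagonS patterns themselves.

    ```python
    def sad(cur, ref, dx, dy):
        """Sum of absolute differences between the current block and ref at offset (dx, dy)."""
        return sum(abs(cur[y][x] - ref[y + dy][x + dx])
                   for y in range(len(cur)) for x in range(len(cur[0])))

    def best_mv(cur, ref, pred_mv, small_range=1, large_range=2):
        bh, bw = len(cur), len(cur[0])
        max_dy, max_dx = len(ref) - bh, len(ref[0]) - bw
        px, py = pred_mv
        # Adaptive window: tight around a small predicted motion, wider otherwise.
        r = small_range if max(abs(px), abs(py)) <= 1 else large_range
        best_cost, best = None, None
        for dy in range(max(0, py - r), min(max_dy, py + r) + 1):
            for dx in range(max(0, px - r), min(max_dx, px + r) + 1):
                cost = sad(cur, ref, dx, dy)
                if best_cost is None or cost < best_cost:
                    best_cost, best = cost, (dx, dy)
        return best

    ref = [[y * 8 + x for x in range(8)] for y in range(8)]
    cur = [row[3:7] for row in ref[2:6]]   # current block displaced by (dx=3, dy=2)
    print(best_mv(cur, ref, pred_mv=(3, 2)))
    ```

    With a good predictor the tight window contains the true vector and far fewer candidates are evaluated, which is the source of the search-point savings reported above.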

  11. PROGRAM ASTEC (ADVANCED SOLAR TURBO ELECTRIC CONCEPT). PART 1. CANDIDATE MATERIALS LABORATORY TESTS

    A space power system of the type envisioned by the ASTEC program requires the development of a lightweight solar collector of high reflectance ... capable of withstanding the space environment for an extended period. A survey of the environment of interest for ASTEC purposes revealed 4 potential ... developed by the solar-collector industry for use in the ASTEC program, and to test the effects of the space environment on these materials. Of 6 material ...

  12. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder with, at least in theory, no degradation of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  14. Control code for laboratory adaptive optics teaching system

    Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael

    2017-09-01

    By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available on the market. One such system is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects, since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.

  15. Adaptation of HAMMER computer code to CYBER 170/750 computer

    Pinheiro, A.M.B.S.; Nair, R.P.K.

    1982-01-01

    The adaptation of the HAMMER computer code to the CYBER 170/750 computer is presented. The HAMMER code calculates cell parameters by multigroup transport theory and reactor parameters by few-group diffusion theory. The auxiliary programs, the modifications carried out, and the use of the HAMMER system adapted to the CYBER 170/750 computer are described. (M.C.K.)

  16. Motion-adaptive intraframe transform coding of video signals

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  17. Context adaptive coding of bi-level images

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...
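A minimal sketch of the context-modelling idea this record describes, assuming a JBIG-like causal template (it is not the actual standard's template or estimator): each pixel's probability of being set is estimated from Laplace-smoothed counts gathered under a small context of already-coded neighbours, and an arithmetic coder would consume these adaptive probabilities.

```python
# Sketch of context-adaptive probability estimation for a bi-level image.
# Context = (left pixel, pixel above); counts start at (1, 1) for smoothing.
from collections import defaultdict

def context_probabilities(image):
    """Return the adaptive P(pixel == 1) used for each pixel, in raster order."""
    counts = defaultdict(lambda: [1, 1])  # context -> [count_0, count_1]
    probs = []
    for y, row in enumerate(image):
        for x, bit in enumerate(row):
            left = row[x - 1] if x > 0 else 0
            above = image[y - 1][x] if y > 0 else 0
            ctx = (left, above)
            c0, c1 = counts[ctx]
            probs.append(c1 / (c0 + c1))   # estimate before seeing the pixel
            counts[ctx][bit] += 1          # then learn from the true pixel
    return probs

probs = context_probabilities([[0, 0, 1],
                               [0, 1, 1]])
```

In a full codec, each probability would be passed to a binary arithmetic coder together with the actual bit; the model and coder stay synchronized because both update only from already-coded pixels.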

  18. An Adaptive Coding Scheme For Effective Bandwidth And Power ...

    Codes for communication channels are in most cases chosen on the basis of the signal-to-noise ratio expected on a given transmission channel. The worst possible noise condition is normally assumed when choosing appropriate codes, so that errors remain within a specified minimum during transmission on the channel.

  19. Adaptable Value-Set Analysis for Low-Level Code

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  20. Variable Rate, Adaptive Transform Tree Coding Of Images

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.
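The fixed-distortion, variable-rate allocation described above can be sketched with the Gaussian rate-distortion bound, R_i = max(0, 0.5 log2(var_i / D)): with the target distortion D held constant per block, each coefficient's rate follows from its estimated variance, so high-activity blocks spend more bits. This is a hedged illustration of the principle, not the paper's auto-regressive spectrum estimation or tree search.

```python
import math

# Per-block rate from the Gaussian rate-distortion bound at fixed distortion D.
def block_rate(variances, target_distortion):
    """Average rate (bits per coefficient) for one sub-block."""
    rates = [max(0.0, 0.5 * math.log2(v / target_distortion))
             for v in variances]
    return sum(rates) / len(rates)

# A high-activity block earns a higher rate than a flat one at the same D.
busy = block_rate([400.0, 100.0, 25.0, 15.0], target_distortion=15.0)
flat = block_rate([40.0, 16.0, 15.0, 15.0], target_distortion=15.0)
```

Coefficients whose variance falls below D receive zero rate, which is exactly why the average block rate varies while the distortion target stays fixed.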

  1. Adaptive Relay Activation in the Network Coding Protocols

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2015-01-01

    State-of-the-art Network coding based routing protocols exploit the link quality information to compute the transmission rate in the intermediate nodes. However, the link quality discovery protocols are usually inaccurate, and introduce overhead in wireless mesh networks. In this paper, we presen...

  2. Adaptive antenna array algorithms and their impact on code division ...

    In this paper, four blind adaptive array algorithms are developed, and their performance under different test situations (e.g., an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations and an optimum one ...

  3. Assessment of ASTEC-CPA pool scrubbing models against POSEIDON-II and SGTR-ARTIST data

    Herranz, Luis E.; Fontanet, Joan

    2009-01-01

    Aerosol scrubbing in pools mitigates the potential source term in key severe accident scenarios in PWRs and BWRs. Even though models were extensively validated in the past, a thorough and systematic validation under key challenging conditions is missing. Some of those conditions are high injection velocity, high pool temperature and/or the presence of submerged structures. In particular, in-code models have been neither updated nor validated against the most recent experimental data. The POSEIDON-II and SGTR-ARTIST projects produced sets of data under conditions of utmost interest for pool scrubbing validation: high temperature and submerged structures. This paper investigates the response of the models encapsulated in the CPA module of the ASTEC code in the simulation of those experimental set-ups. The influence of key pool scrubbing variables such as steam fraction, water depth, gas flow-rate and particle size has been analyzed. Additionally, comparisons to stand-alone code (i.e., SPARC90) responses have also been obtained, so that prediction-to-data deviations can be discussed and attributed either to the underlying models and/or to their implementation in integral accident codes. This work has demonstrated that the limitations of ASTEC-CPA in capturing fundamental trends of aerosol pool scrubbing are substantial (although the SGTR scenarios should not properly be considered within the CPA scope) and that they stem both from the original models (i.e., SPARC90) and from their implementation. This work has been carried out within the European SARNET project of the VI Framework Program of EURATOM. (author)

  4. Multiplexed Spike Coding and Adaptation in the Thalamus

    Rebecca A. Mease

    2017-05-01

    Full Text Available High-frequency “burst” clusters of spikes are a generic output pattern of many neurons. While bursting is a ubiquitous computational feature of different nervous systems across animal species, the encoding of synaptic inputs by bursts is not well understood. We find that bursting neurons in the rodent thalamus employ “multiplexing” to differentially encode low- and high-frequency stimulus features associated with either T-type calcium “low-threshold” or fast sodium spiking events, respectively, and these events adapt differently. Thus, thalamic bursts encode disparate information in three channels: (1) burst size, (2) burst onset time, and (3) precise spike timing within bursts. Strikingly, this latter “intraburst” encoding channel shows millisecond-level feature selectivity and adapts across statistical contexts to maintain stable information encoded per spike. Consequently, calcium events both encode low-frequency stimuli and, in parallel, gate a transient window for high-frequency, adaptive stimulus encoding by sodium spike timing, allowing bursts to efficiently convey fine-scale temporal information.

  5. Use of sensitivity-information for the adaptive simulation of thermo-hydraulic system codes

    Kerner, Alexander M.

    2011-01-01

    Within the scope of this thesis, the development of methods for online adaptation of dynamical plant simulations of a thermal-hydraulic system code to measurement data is described. The described approaches are mainly based on the use of sensitivity information in different areas: statistical sensitivity measures are used for the identification of the parameters to be adapted, and online sensitivities for the parameter adjustment itself. For the parameter adjustment the method of a "system-adapted heuristic adaptation with partial separation" (SAHAT) was developed, which combines certain variants of parameter estimation and control with supporting procedures to solve the basic problems. The applicability of the methods is shown by adaptive simulations of a PKL-III experiment and by selected transients in a nuclear power plant. Finally, the main perspectives for the application of a tracking simulator to a system code are identified.

  6. Adaptation of GRS calculation codes for Soviet reactors

    Langenbuch, S.; Petri, A.; Steinborn, J.; Stenbok, I.A.; Suslow, A.I.

    1994-01-01

    The use of ATHLET for incident calculations of WWER plants has been tested and verified in numerous calculations. Further adaptation may be needed for the WWER-1000 plants. Coupling ATHLET with the 3D nuclear model BIPR-8 for WWER cores clearly improves studies of the influence of neutron kinetics. In the case of RBMK reactors, ATHLET calculations show that typical incidents in the complex RBMK reactors can be calculated, even though verification still has to be worked on. Results of the 3D core model QUABOX/CUBBOX-HYCA show good correlation between calculated and measured values in reactor plants. Calculations carried out to date were used to check essential parameters influencing RBMK core behaviour, especially the dependence of effective void reactivity on the number of control rods. (orig./HP) [de

  7. Fresenius AS.TEC204 blood cell separator.

    Sugai, Mikiya

    2003-02-01

    Fresenius AS.TEC204 is a third-generation blood cell separator that incorporates the continuous centrifugal separation method and automatic control of the cell separation process. Continuous centrifugation separates cell components according to their specific gravity, and different cell components are either harvested or eliminated as needed. The interface between the red blood cells and plasma is optically detected, and the Interface Control (IFC) cooperates with different pumps, monitors and detectors to harvest the required components automatically. The system is composed of three major sections: the Front Panel Unit, the Pump Unit, and the Centrifuge Unit. This unit can be used for a wide variety of clinical applications, including collection of platelets, peripheral blood stem cells, bone marrow stem cells, granulocytes and mononuclear cells, exchange of plasma or red cells, and plasma treatment.

  8. Italian Adaptation of the "Autonomy and Relatedness Coding System"

    Sonia Ingoglia

    2013-08-01

    Full Text Available The study examined the applicability of the observational technique developed by Allen and colleagues (Allen, Hauser, Bell, & O’Connor, 1994; Allen, Hauser, et al., 2003) to investigate the issues of autonomy and relatedness in the parent-adolescent relationship in the Italian context. Thirty-five mother-adolescent dyads participated in a task in which they discussed a family issue about which they disagreed. Adolescents were also administered a self-report measure assessing their relationship with their mothers. Mothers reported significantly higher levels of promoting and inhibiting autonomy, and promoting relatedness behaviors, than their children. Results also suggested a partial behavioral reciprocity within the dyads regarding promoting and inhibiting relatedness, and inhibiting autonomy. Finally, mothers’ inhibiting autonomy behaviors correlated positively with teens’ perception of their relationship as conflicting; adolescents’ inhibiting and promoting autonomy and inhibiting relatedness behaviors correlated positively with open confrontation, rejection and coolness, while promoting relatedness behaviors correlated negatively with open confrontation, rejection and coolness. The results suggest that, for Italian mothers, behaviors linked to autonomy seem to be associated with being involved in a more negative relationship with their children, even if not characterized by open hostility, while for Italian adolescents, behaviors linked to autonomy seem to be associated with threatening the closeness of the relationship. Globally, the findings suggest that the application of this observational procedure may help our understanding of youth autonomy and relatedness development in Italy, but they leave unanswered questions regarding its appropriate adaptation and the role played by cultural differences.

  9. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  10. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
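The per-block code selection described above can be sketched as follows. This is a simplified illustration in the spirit of the compressor, not the original three-code set: here the candidates are Rice codes with parameters k = 0, 1, 2, and the encoder keeps whichever minimizes the block's coded length.

```python
# Simplified sketch of block-adaptive code selection (Rice codes as the
# illustrative candidate set, not the original compressor's codes).

def rice_length(value, k):
    """Length in bits of 'value' under a Rice code with parameter k:
    unary quotient + 1 stop bit + k remainder bits."""
    return (value >> k) + 1 + k

def choose_code(block):
    """Return (best_k, total_bits) minimising the block's coded length."""
    costs = {k: sum(rice_length(v, k) for v in block) for k in (0, 1, 2)}
    best_k = min(costs, key=costs.get)
    return best_k, costs[best_k]

# Small prediction residuals favour k = 0; larger ones push the choice upward.
k_small, bits_small = choose_code([0, 1, 0, 0, 1, 0])
k_large, bits_large = choose_code([5, 7, 4, 6, 5, 8])
```

Because the winning parameter is signalled with a short block header, the decoder needs no stored code tables, mirroring the "no code words stored" property the abstract highlights.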

  11. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
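The control-plane decision the record describes amounts to mapping the measured OSNR to the highest-rate code whose operating threshold it clears. The sketch below is a hedged illustration: the OSNR thresholds are made-up placeholders, not values from the experiment.

```python
# Illustrative OSNR-to-code-rate mapping for adaptive LDPC coding.
# Threshold values are assumed placeholders, ordered highest rate first.

def select_code_rate(osnr_db,
                     thresholds=((0.8, 18.0), (0.75, 16.0), (0.7, 14.0))):
    """Return the highest code rate whose assumed OSNR threshold is met,
    falling back to the most robust (lowest-rate) code otherwise."""
    for rate, required in thresholds:
        if osnr_db >= required:
            return rate
    return thresholds[-1][0]

rate_good = select_code_rate(19.2)   # clean link: highest-rate code
rate_poor = select_code_rate(14.5)   # degraded link: most robust code
```

In the experiment this selection runs in the real-time control plane, so the transmitter switches codes only when the monitored OSNR crosses a threshold, which is what bounds the reported adaptation latency.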

  12. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  13. Supporting Dynamic Adaptive Streaming over HTTP in Wireless Meshed Networks using Random Linear Network Coding

    Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani

    2014-01-01

    This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...

  14. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code using an outer iteration with a Picard scheme. The Picard scheme involves successive updating of the coefficient matrix based on the previously calculated values. The scheme is a simple and effective method for the nonlinear problem, but its effectiveness greatly depends on the under-relaxing capability. Accuracy and calculation speed depend very sensitively on the under-relaxation factor used in the outer iteration that updates the axial mass flow from the continuity equation. The under-relaxation factor in MATRA is generally a fixed, empirically determined value. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code compared with calculation using a fixed under-relaxation factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence for mass conservation in the subchannel code MATRA. The most efficient approach for adaptive under-relaxation appears to be very problem dependent.
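The mechanism can be illustrated on a scalar fixed-point problem. The sketch below is not MATRA code: it shows a Picard iteration whose under-relaxation factor grows while the residual keeps shrinking and is cut back when the residual rebounds, which is one simple adaptation rule among many.

```python
import math

# Illustrative Picard iteration x = g(x) with residual-driven adaptation
# of the under-relaxation factor omega (not the MATRA implementation).

def picard_adaptive(g, x0, omega=0.5, tol=1e-10, max_iter=200):
    """Return (solution, iterations) for the fixed-point problem x = g(x)."""
    x, prev_res = x0, float("inf")
    for i in range(max_iter):
        x_new = (1.0 - omega) * x + omega * g(x)
        res = abs(x_new - x)
        if res < tol:
            return x_new, i + 1
        # Adapt: accelerate on monotone decrease, damp on residual growth.
        omega = min(1.0, omega * 1.1) if res < prev_res else max(0.1, omega * 0.5)
        prev_res, x = res, x_new
    return x, max_iter

# Fixed point of g(x) = cos(x) is approximately 0.7390851.
root, iters = picard_adaptive(math.cos, 1.0)
```

In a subchannel code the same loop structure would wrap the axial mass-flow update from the continuity equation, with the residual measured over the whole flow field rather than a scalar.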

  15. Adaptive bit plane quadtree-based block truncation coding for image compression

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants. It is therefore desirable for real-time image compression applications.
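A compact sketch of AMBTC, the base coder the quadtree method builds on: each block is reduced to a one-bit plane plus two reconstruction levels, the means of the pixels above and below the block mean. This illustrates only the base coder, not the paper's quadtree partitioning or edge detection.

```python
# Minimal AMBTC encode/decode for one flattened block of pixels.

def ambtc_encode(block):
    """Return (bit_plane, high_level, low_level) for one block."""
    mean = sum(block) / len(block)
    plane = [1 if p >= mean else 0 for p in block]
    hi = [p for p, b in zip(block, plane) if b] or [mean]   # guard flat blocks
    lo = [p for p, b in zip(block, plane) if not b] or [mean]
    return plane, sum(hi) / len(hi), sum(lo) / len(lo)

def ambtc_decode(plane, hi, lo):
    """Reconstruct the block from its bit plane and two levels."""
    return [hi if b else lo for b in plane]

plane, hi, lo = ambtc_encode([10, 12, 200, 210])
rec = ambtc_decode(plane, hi, lo)
```

The quadtree variant described above splits edge-containing blocks further and decides per minimal block how much of the bit plane to transmit, trading bits for the MSE loss this base coder would incur.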

  16. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485
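The adaptivity described above can be sketched with a simple redundancy rule (an illustration, not the paper's mechanism): the number of random-linear-coded packets sent per generation is inflated according to the currently observed link loss rate, so the sink still receives at least the k packets needed to decode. The fixed safety margin is an assumed placeholder.

```python
import math

# Illustrative loss-adaptive redundancy rule for network-coded delivery:
# send enough coded packets that, at the observed loss rate, at least k
# are expected to arrive, plus a small assumed safety margin.

def coded_packets(k, loss_rate, margin=2):
    """Packets to transmit for a generation of k source packets."""
    return math.ceil(k / (1.0 - loss_rate)) + margin

# A lossier link triggers more redundancy; a clean link adds almost none.
n_clean = coded_packets(k=16, loss_rate=0.05)
n_lossy = coded_packets(k=16, loss_rate=0.30)
```

A QoS-aware version would additionally scale the margin with the application's reliability and delay requirements, which is the user-/application-level input the mechanism above exploits.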

  17. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Mohammad Abdur Razzaque

    2014-12-01

    Full Text Available Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network’s QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  18. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  19. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  20. ASTEC and MODEL: Controls software development at Goddard Space Flight Center

    Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.

    1993-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software has been under development at the Goddard Space Flight Center (GSFC) for the last three years. The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. ASTEC is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN. An upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 90's and how it relates to ASTEC.

  1. Further analysis of the FRONT model in ASTEC by simulating the hydrogen deflagration experiment BMC Ix9

    Braehler, Thimo; Koch, Marco K.

    2011-01-01

    Effects of possible hydrogen deflagration, such as pressure build-up and temperature increase, can become important for the evaluation of late phases in loss-of-coolant accidents. In this compact paper, the simulation of the hydrogen deflagration test BMC Ix9 with the FRONT model of the integral lumped-parameter code ASTEC is treated. This model has been available since mid-2009, released with ASTEC V2.0. To check the validity of the model with regard to its applicability to different phenomena, a large number of simulations are necessary. The model was used by RUB in the frame of the 'International Standard Problem on Hydrogen Combustion (ISP-49)' and within the EC NoE SARNET2. It has been concluded that the model is able to simulate a broad range of hydrogen deflagration phenomena under different experimental conditions. The experiments analysed in the mentioned benchmarks are characterised by flame propagation in the vertical direction. Moreover, flame propagation in multi-compartment geometries was not considered. In the BMC Ix9 test, horizontal hydrogen deflagration with flame propagation through 3 rooms was investigated. The FRONT model was already validated against the BMC Hx23 experiment with satisfactory results. In comparison to this test, the number of compartments and the initial gas composition, such as hydrogen and steam concentration, differ in the BMC Ix9 experiment. Previous investigations by RUB showed that the modelling of turbulence related to the transport between different compartments, and the determination of this quantity, has a strong influence on the simulation results. In the following, the FRONT model is described briefly, the simulation results are discussed, and a first recommendation for the nodalisation is given. (orig.)

  2. NATO-ASTEC-matrix-research environment, information sharing and MCA

    Apikyan, S.; Yerznkanyan, K.; Diamond, D.; Vardanyan, M.; Sevikyan, G.

    2010-01-01

    The successful implementation of the NATO-ASTECMATRIX project in Armenia is an essential contribution to security, stability and solidarity among regional nations, applying the best technical expertise to problem solving. Collaboration, networking and capacity-building are the means used to accomplish these goals. A further aim is to promote co-operation with new partners; ASTEC is creating links between scientists and organizations in formerly separated communities, developing a new strategy that concentrates support on security-related collaborative projects, finding answers to critical questions, and providing a way of connecting nations. Within Armenia, the NATO-ASTECMATRIX leads to a network of high-standard laboratories that will drastically improve the overview and the technical infrastructure for monitoring, accounting and control of CBRN materials in Armenia. This new infrastructure will enhance the exchange of information on this vital issue via the IRIS. In follow-up phases, it will also help to better define the needs and requirements for a policy to enhance legal tools for the management of these materials, and for the creation of one or several agencies dealing with wastes or no-longer-useful materials containing CBRN components in Armenia

  3. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  4. Overall simulation of a HTGR plant with the gas adapted MANTA code

    Emmanuel Jouet; Dominique Petit; Robert Martin

    2005-01-01

    Full text of publication follows: AREVA's subsidiary Framatome ANP is developing a Very High Temperature Reactor nuclear heat source that can be used for electricity generation as well as cogeneration, including hydrogen production. The selected product has an indirect-cycle architecture which is easily adapted to all possible uses of the nuclear heat source. The coupling to the applications is implemented through an Intermediate Heat Exchanger. The system code chosen to calculate the steady-state and transient behaviour of the plant is based on the MANTA code. The flexible and modular MANTA code, originally a system code for all non-LOCA PWR plant transients, has been the subject of new developments to simulate all the forced-convection transients of a nuclear plant with a gas-cooled High Temperature Reactor, including specific core thermal-hydraulic and neutronic models, gas and water-steam turbomachinery, and the control structure. The gas-adapted MANTA code version is now able to model a complete HTGR plant with a direct Brayton cycle as well as indirect cycles. To validate these new developments, a MANTA model of a real plant with a direct Brayton cycle has been built, and steady states and transients have been compared with recorded thermal-hydraulic measurements. Finally, a comparison with the RELAP5 code has been made for transient calculations of the AREVA indirect-cycle HTR project plant. Moreover, to improve user-friendliness so that MANTA can serve as a system design and optimization tool as well as a plant simulation tool, a Man-Machine Interface is available. Acronyms: MANTA Modular Advanced Neutronic and Thermal hydraulic Analysis; HTGR High Temperature Gas-Cooled Reactor. (authors)

  5. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    Al-Ghadhban, Samir

    2014-12-23

    © 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC. In addition, we propose adaptive MLSTBC schemes that are capable of accommodating the channel signal-to-noise ratio variation of wireless systems by near-instantaneously adapting the uplink transmission configuration. The main results demonstrate that significant effective throughput improvements can be achieved while maintaining a certain target bit error rate.
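As a toy illustration of near-instantaneous adaptation, the sketch below picks the most spectrally efficient transmission mode whose switching threshold the current channel SNR exceeds. The mode table and thresholds are invented for illustration (thresholds would in practice be chosen to meet the target error rate), not taken from the paper:

```python
# Threshold-based adaptive mode selection: higher SNR unlocks configurations
# with more multiplexing layers and denser constellations.

MODES = [  # (min SNR in dB, description, bits per symbol period)
    (0.0,  "single-layer STBC, BPSK", 1),
    (8.0,  "single-layer STBC, QPSK", 2),
    (14.0, "two-layer MLSTBC, QPSK", 4),
    (20.0, "two-layer MLSTBC, 16QAM", 8),
]

def select_mode(snr_db):
    """Return the most spectrally efficient mode allowed at this SNR."""
    chosen = MODES[0]          # most robust mode is the fallback
    for mode in MODES:
        if snr_db >= mode[0]:
            chosen = mode
    return chosen

print(select_mode(15.5)[1])    # -> two-layer MLSTBC, QPSK
```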

  6. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is carried out with respect to noise tolerance.
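One way the adaptive idea can be pictured (an assumption for illustration, not the paper's actual GCA design algorithm) is to attenuate the aperture transmittance wherever the previous FPA snapshot approached saturation, so the next shot keeps bright regions within the detector's range:

```python
import numpy as np

# Toy grayscale-aperture adaptation: scale down transmittance where the
# previous measurement exceeded a fraction of the detector full-well capacity.

def adapt_aperture(aperture, measurement, full_well, target=0.8):
    """Attenuate aperture entries whose measurement exceeds target*full_well;
    other entries are left unchanged. Transmittance stays within [0, 1]."""
    scale = np.ones_like(aperture)
    hot = measurement > target * full_well
    scale[hot] = (target * full_well) / measurement[hot]
    return np.clip(aperture * scale, 0.0, 1.0)

rng = np.random.default_rng(0)
aperture = rng.uniform(0.2, 1.0, size=(4, 4))     # initial grayscale aperture
scene = rng.uniform(0.0, 2000.0, size=(4, 4))     # bright static scene
meas = aperture * scene                            # idealized (noiseless) FPA read
new_ap = adapt_aperture(aperture, meas, full_well=1000.0)
print(np.all(new_ap * scene <= 0.8 * 1000.0 + 1e-9))  # -> True
```

A real CASSI system would of course work from the coded 2D projections rather than a per-pixel scene product, but the feedback loop has the same shape: measure, detect near-saturation, attenuate, re-measure.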

  7. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

    It is known that three-dimensional (3-D) discrete ordinates (Sn) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multi-group, anisotropic Sn problems on distributed-memory platforms, such as the IBM-SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel Sn applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the amount of storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing.

  8. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

    To achieve both a high compression ratio and information preservation, an efficient approach is to combine segmentation with a lossy compression scheme. Microcalcification in mammograms is one of the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
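The region-adaptive quantization step can be illustrated with a minimal sketch, assuming a simple per-block rule: a fine quantizer step for blocks touching the segmented region of interest, a coarse step elsewhere. The step sizes and block size below are invented for illustration:

```python
import numpy as np

# Per-block uniform quantization with a segmentation-driven step size.
# A uniform quantizer with step q bounds the peak (per-sample) error by q/2.

def quantize_blocks(coeffs, roi_mask, step_roi=1.0, step_bg=8.0, block=4):
    out = np.empty_like(coeffs, dtype=float)
    for i in range(0, coeffs.shape[0], block):
        for j in range(0, coeffs.shape[1], block):
            blk = coeffs[i:i+block, j:j+block]
            step = step_roi if roi_mask[i:i+block, j:j+block].any() else step_bg
            out[i:i+block, j:j+block] = np.round(blk / step) * step
    return out

coeffs = np.arange(64, dtype=float).reshape(8, 8)
roi = np.zeros((8, 8), dtype=bool)
roi[0, 0] = True                      # top-left block is "significant"
rec = quantize_blocks(coeffs, roi)
err = np.abs(rec - coeffs)
print(err[:4, :4].max() <= 0.5, err.max() <= 4.0)  # -> True True
```

The peak error in the significant block is bounded by step_roi/2 and everywhere else by step_bg/2, which is the sense in which the quantizers are "peak-error-constrained".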

  9. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
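Bit allocations of this kind are often computed by greedy marginal analysis; the sketch below is an illustrative stand-in for the dissertation's algorithm, assuming the standard high-rate distortion model D_i(b) = var_i · 2^(-2b) for each transform coefficient:

```python
# Greedy (marginal-return) bit allocation: repeatedly give the next bit to
# the coefficient whose distortion would drop the most.

def allocate_bits(variances, total_bits):
    bits = [0] * len(variances)
    dist = list(variances)               # current distortion per coefficient
    for _ in range(total_bits):
        # adding one bit multiplies D_i by 1/4, so the gain is 0.75 * D_i;
        # the coefficient with the largest current distortion gets the bit
        i = max(range(len(dist)), key=lambda k: dist[k])
        bits[i] += 1
        dist[i] /= 4.0
    return bits

print(allocate_bits([16.0, 4.0, 1.0], 6))  # -> [3, 2, 1]
```

Higher-variance coefficients receive more bits, and the greedy rule is optimal here because each coefficient's distortion reduction per bit is monotonically decreasing.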

  10. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  11. Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding

    Mailhes Corinne

    2007-01-01

    Full Text Available Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original zerotree algorithm, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW for the compression of hyperspectral images. The subordinate pass is suppressed to remove the need to keep the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.
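One standard signed binary digit representation is the non-adjacent form (NAF), which minimizes the number of non-zero digits and guarantees no two adjacent non-zeros, leaving long zero runs for zerotrees to exploit. Whether the paper uses NAF specifically is an assumption here; the sketch shows the representation itself:

```python
# Non-adjacent form: digits in {-1, 0, 1}, no two adjacent non-zero digits.

def naf(n):
    """Return NAF digits of n > 0, least significant first."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)      # +1 if n ≡ 1 (mod 4), -1 if n ≡ 3 (mod 4)
        else:
            d = 0
        digits.append(d)
        n = (n - d) // 2
    return digits

d = naf(7)                        # 7 = -1 + 0*2 + 0*4 + 1*8
print(d)                          # -> [-1, 0, 0, 1]
```

The binary form of 7 (111) has three non-zero digits; its NAF has only two, separated by zeros.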

  12. Normalized value coding explains dynamic adaptation in the human valuation process.

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
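The divisive normalization account described above can be sketched minimally: the subjective value of the current item is divided by a term that grows with the average of recently observed values. The semisaturation constant sigma and the plain-average history window are illustrative assumptions, not the paper's fitted model:

```python
# Divisive normalization over a value history: the same item is valued less
# when the recent context was rich than when it was poor.

def normalized_value(v, history, sigma=1.0):
    context = sum(history) / len(history) if history else 0.0
    return v / (sigma + context)

poor = normalized_value(10.0, [1.0, 2.0, 3.0])   # recent context mean = 2
rich = normalized_value(10.0, [8.0, 9.0, 10.0])  # recent context mean = 9
print(round(poor, 3), round(rich, 3), poor > rich)  # -> 3.333 1.0 True
```

This captures the paper's key behavioral signature: current valuations vary inversely with the average value of recently observed items.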

  13. Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.

    Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir

    2017-08-01

    Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications for all biomedical disciplines. Here, we aimed to understand how evolution shapes bacteriophage lambda genes by performing a high-resolution analysis of ribosomal profiling data and gene-expression-related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that evolution of viral coding regions is driven, among others, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency could be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that it can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant to additional viruses as well. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  14. Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage

    Guangjun Ge

    2018-01-01

    Full Text Available Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impacts of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is effective enough for finite-precision LDPC decoders. Secondly, we analyze the effects of quantization precision loss for static sign bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a construction of a Hamming product code for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in both decoding threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10⁻³ to 10⁻⁴.

  15. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  18. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  19. Adapting Canada's northern infrastructure to climate change: the role of codes and standards

    Steenhof, P.

    2009-01-01

    This report provides the results of a research project that investigated the use of codes and standards in terms of their potential for fostering adaptation to the future impacts of climate change on built infrastructure in Canada's north. This involved a literature review, undertaking key informant interviews, and a workshop where key stakeholders came together to dialogue on the challenges facing built infrastructure in the north as a result of climate change and the role of codes and standards to help mitigate climate change risk. In this article, attention is given to the topic area of climate data and information requirements related to climate and climate change. This was an important focal area that was identified through this broader research effort since adequate data is essential in allowing codes and standards to meet their ultimate policy objective. A number of priorities have been identified specific to data and information needs in the context of the research topic investigated: There is a need to include northerners in developing the climate and permafrost data required for codes and standards so that these reflect the unique geographical, economic, and cultural realities and variability of the north; Efforts should be undertaken to realign climate design values so that they reflect both present and future risks; There is a need for better information on the rate and extent of permafrost degradation in the north; and, There is a need to improve monitoring of the rate of climate change in the Arctic. (author)

  20. Cooperative and Adaptive Network Coding for Gradient Based Routing in Wireless Sensor Networks with Multiple Sinks

    M. E. Migabo

    2017-01-01

    Full Text Available Despite its low computational cost, the Gradient Based Routing (GBR) broadcast of interest messages in Wireless Sensor Networks (WSNs) causes significant packet duplications and unnecessary packet transmissions. This results in energy wastage, traffic load imbalance, high network traffic, and low throughput. Thanks to the emergence of fast and powerful processors, the development of efficient network coding strategies is expected to enable efficient packet aggregation and reduce packet retransmissions. For multiple-sink WSNs, the challenge consists of efficiently selecting a suitable network coding scheme. This article proposes a Cooperative and Adaptive Network Coding for GBR (CoAdNC-GBR) technique which considers the network density, as dynamically defined by the average number of neighbouring nodes, to efficiently aggregate interest messages. The aggregation is performed by means of linear combinations with random coefficients of a finite Galois Field of variable size GF(2^S) at each node, and the decoding is performed by means of Gaussian elimination. The obtained results reveal that, by exploiting the cooperation of the multiple sinks, the CoAdNC-GBR not only improves the transmission reliability of links and lowers the number of transmissions and the propagation latency, but also enhances the energy efficiency of the network when compared to the GBR-network coding (GBR-NC) techniques.
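The encode/decode mechanism can be sketched over GF(2) instead of the paper's variable-size GF(2^S): linear combination then reduces to XOR, and decoding to Gaussian elimination on bit vectors. This is a simplification for illustration, not CoAdNC-GBR itself:

```python
import random

def encode(packets, rng):
    """One coded packet: a random GF(2) combination (XOR) of the sources."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):                            # avoid a useless all-zero row
        coeffs[rng.randrange(len(coeffs))] = 1
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, n):
    """Gauss-Jordan elimination over GF(2); None if rank < n."""
    rows = [list(c) + [p] for c, p in coded]       # augmented matrix
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] for i in range(n)]

rng = random.Random(1)
source = [0b1010, 0b0111, 0b1100]
coded = [encode(source, rng) for _ in range(8)]    # redundancy from many nodes
while decode(coded, 3) is None:                    # draw more packets if unlucky
    coded.append(encode(source, rng))
print(decode(coded, 3) == source)                  # -> True
```

Larger fields (GF(2^S) with S > 1) make each coded packet innovative with higher probability, which is precisely the trade-off the adaptive field-size selection exploits.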

  1. Adaptation of OCA-P, a probabilistic fracture-mechanics code, to a personal computer

    Ball, D.G.; Cheverton, R.D.

    1985-01-01

    The OCA-P probabilistic fracture-mechanics code can now be executed on a personal computer with 512 kilobytes of memory, a math coprocessor, and a hard disk. A user's guide for the particular adaptation has been prepared, and additional importance sampling techniques for OCA-P have been developed that allow the sampling of only the tails of selected distributions. Features have also been added to OCA-P that permit RTNDT to be used as an ''independent'' variable in the calculation of P

  2. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tₙ a new step-size hₙ₊₁ = h(tₙ; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy that does not allow changing the size of the step if the step-size ratio between two consecutive steps is close to unity is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with
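A minimal toy (not one of the codes studied in the paper) shows the underlying policy: accept a step when the local error estimate of an embedded Euler/Heun pair is within the tolerance, rescale the step size with a standard controller, and observe that the global error shrinks with the tolerance:

```python
import math

def integrate(delta, t_end=1.0):
    """Integrate y' = -y, y(0) = 1, with an embedded Euler/Heun pair,
    accepting steps whose local error estimate is <= delta."""
    t, y, h = 0.0, 1.0, 0.01
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        euler = y + h * (-y)                       # 1st-order predictor
        heun = y + 0.5 * h * (-y - euler)          # 2nd-order corrector
        err = abs(heun - euler)                    # local error estimate
        if err <= delta:
            t, y = t + h, heun                     # accept the step
        h *= 0.9 * math.sqrt(delta / max(err, 1e-16))  # step-size controller
    return abs(y - math.exp(-t_end))               # global error at t_end

e1, e2 = integrate(1e-4), integrate(1e-5)
print(e1 > e2)                                     # -> True
```

The controller keeps the local estimate near delta, so the accepted step sizes, and through them the global error, track the tolerance: this is the tolerance proportionality the paper analyzes rigorously.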

  3. Quadrature amplitude modulation from basics to adaptive trellis-coded turbo-equalised and space-time coded OFDM CDMA and MC-CDMA systems

    Hanzo, Lajos

    2004-01-01

    "Now fully revised and updated, with more than 300 pages of new material, this new edition presents the wide range of recent developments in the field and places particular emphasis on the family of coded modulation aided OFDM and CDMA schemes. In addition, it also includes a fully revised chapter on adaptive modulation and a new chapter characterizing the design trade-offs of adaptive modulation and space-time coding." "In summary, this volume amalgamates a comprehensive textbook with a deep research monograph on the topic of QAM, ensuring it has a wide-ranging appeal for both senior undergraduate and postgraduate students as well as practicing engineers and researchers."--Jacket.

  4. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths

    Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.

    2018-04-01

    We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
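A geometric multipole acceptance criterion of the kind evaluated here can be illustrated with a toy Barnes-Hut-style walk in 2D: a node of size s at distance d is accepted as a monopole when s/d < theta. FLASH's MACs and tree layout are more elaborate, so everything below is an illustrative sketch:

```python
import math

class Node:
    """Quadtree node storing total mass and centre of mass of its particles."""
    def __init__(self, particles, center, size):
        self.size = size
        self.mass = sum(m for m, _, _ in particles)
        self.com = (sum(m * x for m, x, _ in particles) / self.mass,
                    sum(m * y for m, _, y in particles) / self.mass)
        self.children = []
        if len(particles) > 1:
            cx, cy = center
            for sx in (-1, 1):
                for sy in (-1, 1):
                    sub = [p for p in particles
                           if (p[1] >= cx) == (sx > 0) and (p[2] >= cy) == (sy > 0)]
                    if sub:
                        self.children.append(Node(
                            sub, (cx + sx * size / 4, cy + sy * size / 4), size / 2))

def potential(node, x, y, theta):
    dx, dy = node.com[0] - x, node.com[1] - y
    d = math.hypot(dx, dy)
    if not node.children or node.size / d < theta:   # MAC: accept monopole
        return -node.mass / d
    return sum(potential(c, x, y, theta) for c in node.children)

pts = [(1.0, 0.1, 0.2), (2.0, 0.8, 0.7), (1.5, 0.3, 0.9), (1.0, 0.6, 0.4)]
root = Node(pts, (0.5, 0.5), 1.0)
exact = sum(-m / math.hypot(px - 5.0, py - 5.0) for m, px, py in pts)
approx = potential(root, 5.0, 5.0, theta=0.5)
print(abs(approx - exact) / abs(exact) < 0.01)       # -> True
```

The paper's "approximate partial error MAC" replaces this purely geometric test with an error estimate, trading a slightly costlier acceptance check for fewer opened nodes at the same accuracy.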

  5. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Laureline Logiaco

    2015-08-01

    Full Text Available The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  7. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.

    Chien, Tsair-Wei; Lin, Weir-Sen

    2016-03-02

    The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
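The efficiency gain of CAT comes from always administering the most informative remaining item at the current ability estimate. The sketch below illustrates this selection step for the dichotomous Rasch model; the item bank, difficulties, and provisional ability value are invented for illustration and do not come from the NHS survey data (the study itself used the partial credit model).

```python
import math

def rasch_prob(theta, b):
    """Probability of a positive response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def select_next_item(theta, difficulties, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, difficulties[i]))

# Illustrative item bank: difficulties spread over the latent scale.
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
theta_hat = 0.3               # current provisional ability estimate
chosen = select_next_item(theta_hat, bank, administered={3})
```

Item information peaks when the item difficulty matches the current ability estimate, which is why a CAT can stop after far fewer items than a fixed-form questionnaire.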

  8. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks.

    Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian

    2018-05-10

    Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can take on new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay; (2) as the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts; (3) the larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can even be reduced, because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption is nearly balanced and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that the proposed
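The core idea of the ABRCD scheme described above, enlarging the broadcast radius only where an energy surplus exists, can be sketched as follows. The scaling rule, energy model, and all numbers are illustrative assumptions for this sketch, not the paper's actual radius-assignment algorithm.

```python
def assign_radius(residual_energy, base_radius, min_required, max_radius):
    """Scale a node's broadcast radius with its energy surplus.

    Nodes at or below the minimum required energy keep the base radius
    (protecting hot-spot lifetime); surplus energy buys a proportionally
    larger radius, capped at max_radius.
    """
    if residual_energy <= min_required:
        return base_radius
    surplus_ratio = (residual_energy - min_required) / min_required
    return min(max_radius, base_radius * (1.0 + surplus_ratio))

# Illustrative nodes: (node id, residual energy in joules).
nodes = [("hotspot", 0.8), ("mid", 1.5), ("edge", 3.0)]
radii = {nid: assign_radius(e, base_radius=20.0, min_required=1.0, max_radius=60.0)
         for nid, e in nodes}
```

With these toy values, the energy-poor hot-spot node keeps the base 20 m radius while the energy-rich edge node broadcasts farther, reaching the network edge in fewer hops.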

  9. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks

    Shidi Yu

    2018-05-01

    Full Text Available Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can take on new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay; (2) as the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts; (3) the larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can even be reduced, because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption is nearly balanced and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that

  10. A study on climatic adaptation of dipteran mitochondrial protein coding genes

    Debajyoti Kabiraj

    2017-10-01

    Full Text Available Diptera, the true flies, are frequently found in nature, and their habitat spans the whole world, including Antarctica and the Polar Regions. The number of documented species in the order Diptera is quite high, thought to be 14% of all animal species on earth [1]. Most studies in Diptera have focused on taxa of economic and medical importance, such as the fruit flies Ceratitis capitata and Bactrocera spp. (Tephritidae), which are serious agricultural pests; the blowflies (Calliphoridae) and oestrid flies (Oestridae), which can cause myiasis; the anopheline mosquitoes (Culicidae), which are the vectors of malaria; and the leaf-miners (Agromyzidae), vegetable and horticultural pests [2]. The insect mitochondrion, consisting of 13 protein coding genes, 22 tRNAs and 2 rRNAs, is the remnant of an alpha-proteobacterium and is responsible for the simultaneous functions of energy production and thermoregulation of the cell through a bi-genomic system; thus differing adaptability under different climatic conditions might have been compensated by complementary changes in both genomes [3,4]. In this study we have collected the complete mitochondrial genomes and occurrence data of one hundred thirteen dipteran insects from different databases and a literature survey. Our understanding of the genetic basis of climatic adaptation in Diptera is limited to basic information on the occurrence locations of those species and the mitogenetic factors underlying changes in conspicuous phenotypes. To examine this hypothesis, we have performed nucleotide substitution analyses for the 13 protein coding genes of mitochondrial DNA, individually and combined, using different software, for monophyletic as well as paraphyletic groups of dipteran species. Moreover, we have also calculated the codon adaptation index for all dipteran mitochondrial protein coding genes. Following this work, we have classified our sample organisms according to their location data from GBIF (https
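The codon adaptation index mentioned above is conventionally computed as the geometric mean of each codon's relative adaptiveness w, where w is the codon's frequency divided by the frequency of the optimal synonymous codon in a reference set of highly expressed genes. A minimal sketch, with a hypothetical weight table covering only two amino acids:

```python
import math

def codon_adaptation_index(sequence, weights):
    """CAI: geometric mean of each codon's relative adaptiveness w,
    where w = (codon frequency) / (frequency of the optimal synonymous
    codon) in a reference set of highly expressed genes."""
    codons = [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]
    logs = [math.log(weights[c]) for c in codons]
    return math.exp(sum(logs) / len(logs))

# Hypothetical relative-adaptiveness table for two amino acids.
w = {"TTT": 0.4, "TTC": 1.0,   # Phe
     "GGA": 1.0, "GGC": 0.25}  # Gly
cai = codon_adaptation_index("TTCGGATTC", w)
```

A sequence built entirely from optimal codons scores CAI = 1; mixing in rarer synonymous codons pulls the index toward 0.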

  11. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
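A minimal simulation of the modeling framework described above, a spike response model with a strong short-range refractory kernel tied to the most recent spike plus a weak adaptation variable accumulated over all past spikes, might look as follows. The exponential kernel shapes and all parameter values are illustrative, not fitted values from the paper.

```python
import math, random

def simulate_srm(inputs, dt=0.001, eta_ref=-8.0, tau_ref=0.005,
                 eta_adapt=-1.0, tau_adapt=0.2, seed=0):
    """Simulate a spike response model neuron with refractoriness and adaptation.

    The log firing intensity is the input drive plus a strong short kernel
    from the last spike (refractoriness) and a weak accumulated kernel from
    all past spikes (adaptation), as in the generalized linear neuron model.
    """
    rng = random.Random(seed)
    spikes, last_spike, adapt = [], None, 0.0
    for step, u in enumerate(inputs):
        t = step * dt
        adapt *= math.exp(-dt / tau_adapt)        # adaptation decays
        h = u + adapt
        if last_spike is not None:                # refractory kernel
            h += eta_ref * math.exp(-(t - last_spike) / tau_ref)
        rate = math.exp(h)                        # intensity in Hz
        if rng.random() < rate * dt:              # thin a Poisson process
            spikes.append(t)
            last_spike = t
            adapt += eta_adapt                    # spike-triggered adaptation
    return spikes

spikes = simulate_srm([3.0] * 2000)               # 2 s of constant drive
```

Because the refractory kernel is much stronger than the adaptation kernel, the last spike dominates the instantaneous rate, which is the observation behind the quasi-renewal approximation.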

  12. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system of rate-compatible low density parity check (RC-LDPC) codes combined with a multi-relay selection protocol is proposed. In traditional relay selection protocols, only the channel state information (CSI) of the source-relay links and the CSI of the relay-destination links have been considered. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more chances for collaboration. Furthermore, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
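One way to sketch a relay-selection metric that also accounts for inter-relay CSI is shown below. The bottleneck-style combining rule (and the idea of folding in the best inter-relay link) and the example channel values are assumptions for illustration, not the paper's exact selection criterion.

```python
def select_relays(csi_sr, csi_rd, csi_rr, k=2):
    """Rank relays by a min-CSI bottleneck metric and keep the best k.

    csi_sr[i]: source -> relay i channel quality
    csi_rd[i]: relay i -> destination channel quality
    csi_rr[i][j]: relay i -> relay j channel quality, giving extra
                  cooperation chances (folded in here as the best
                  inter-relay link available to relay i).
    """
    scores = []
    for i in range(len(csi_sr)):
        best_rr = max((csi_rr[i][j] for j in range(len(csi_sr)) if j != i),
                      default=0.0)
        # Bottleneck of the two-hop path, relaxed by inter-relay help.
        scores.append((min(csi_sr[i], max(csi_rd[i], best_rr)), i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

chosen = select_relays(csi_sr=[10.0, 6.0, 8.0],
                       csi_rd=[2.0, 9.0, 7.0],
                       csi_rr=[[0.0, 5.0, 1.0],
                               [5.0, 0.0, 4.0],
                               [1.0, 4.0, 0.0]], k=2)
```

In this toy example, relay 0 has the best source link but a poor destination link, so the inter-relay escape route only partially rescues it and relays 2 and 1 are selected.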

  13. Image sensor system with bio-inspired efficient coding and adaptation.

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
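The three coding strategies above, logarithmic compression, local average subtraction, and gain control, can be sketched in software as follows. The 3x3 neighborhood and the fixed gain are illustrative simplifications of the FPGA/resistive-network implementation, which computes the local average instantaneously in analog hardware.

```python
import math

def encode_pixel_array(img, gain=1.0):
    """Bio-inspired coding sketch: log transform, then local average subtraction.

    img is a 2D list of positive light intensities. Each pixel is
    log-compressed (mimicking the controllable APS characteristic), then the
    3x3 local mean (standing in for the resistive-network average) is
    subtracted and the result scaled by a feedback gain to set the contrast.
    """
    h, w = len(img), len(img[0])
    logged = [[math.log(v) for v in row] for row in img]
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            neigh = [logged[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            local_avg = sum(neigh) / len(neigh)
            row.append(gain * (logged[y][x] - local_avg))
        out.append(row)
    return out

# A uniform patch encodes to zero contrast regardless of illumination level.
flat = encode_pixel_array([[100.0] * 4 for _ in range(4)])
```

Because the subtraction happens after the log transform, a global change in illumination shifts every logged value equally and cancels out, leaving only local contrast, which is the point of the scheme.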

  14. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not a localized deep minimum of the huge fitness landscape.
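Fitness sharing derates an individual's fitness by the crowding of its niche, so densely populated regions of the landscape look less attractive and the population spreads across multiple optima. A minimal sketch of the technique, using a hypothetical Hamming-distance niche measure and a triangular sharing kernel (the paper's actual distance measure and parameters may differ):

```python
def shared_fitness(raw_fitness, population, distance, sigma_share=2.0):
    """Fitness sharing: divide each individual's raw fitness by its niche
    count, so crowded regions of the search space are penalized and the
    population spreads across multiple peaks."""
    shared = []
    for i, ind_i in enumerate(population):
        niche_count = 0.0
        for ind_j in population:
            d = distance(ind_i, ind_j)
            if d < sigma_share:                       # triangular kernel
                niche_count += 1.0 - d / sigma_share
        shared.append(raw_fitness[i] / niche_count)
    return shared

def hamming(a, b):
    """Hamming distance between two hypothetical code assignments."""
    return sum(x != y for x, y in zip(a, b))

pop = ["AAAA", "AAAT", "TTTT"]
fit = [1.0, 1.0, 1.0]
out = shared_fitness(fit, pop, hamming)
```

With equal raw fitness, the two near-identical individuals share a niche and are penalized, while the isolated one keeps its full fitness; this is what lets the algorithm probe whether the canonical code occupies a deep, isolated peak.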

  15. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filters architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filters architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meet the real-time requirement of the in-loop filters for the 8 K × 4 K video format at 132 fps.

  16. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794-3800 (United States); Almgren, Ann S.; Zhang, Weiqun [Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  17. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    Downey, Joseph; Mortensen, Dale; Evans, Michael; Briones, Janette; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.
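At its core, adaptive coding and modulation reduces to picking the most efficient MODCOD whose decoding threshold still fits under the estimated SNR minus a fade margin, falling back to the most robust mode otherwise. The sketch below uses invented, DVB-S2-style entries; real thresholds depend on roll-off, frame length, and the target error rate, and the experiment's actual MODCOD set and margins are not reproduced here.

```python
# Illustrative subset of DVB-S2-style MODCODs with invented SNR thresholds.
MODCODS = [
    # (name, spectral efficiency in bits/s/Hz, minimum Es/N0 in dB)
    ("QPSK 1/2",   1.0,  1.0),
    ("QPSK 3/4",   1.5,  4.0),
    ("8PSK 2/3",   2.0,  6.6),
    ("8PSK 5/6",   2.5,  9.4),
    ("16APSK 3/4", 3.0, 10.2),
]

def select_modcod(snr_db, margin_db=1.0):
    """Pick the highest-efficiency MODCOD whose threshold fits under the
    current SNR estimate minus a fade margin; fall back to the most
    robust mode if nothing fits."""
    usable = [m for m in MODCODS if m[2] <= snr_db - margin_db]
    return max(usable, key=lambda m: m[1]) if usable else MODCODS[0]

choice = select_modcod(8.0)
```

The round-trip delay discussed in the abstract matters because the SNR estimate driving this selection is stale by the time the new MODCOD takes effect, which is what the predictive and learning techniques try to compensate for.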

  18. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

    The three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, by making use of the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the current GOP. The GOP-level MVP scheme is thereby chosen as either S-MVP or AMDST-MVP, where AMDST-MVP is the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the S-MVP result for the current block is employed to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that AMDST-MVP not only improves the coding efficiency but also reduces the computational complexity.

  19. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing among an arbitrary number of classical participants, no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
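The Lagrange-interpolation component of such a threshold scheme is classical Shamir-style sharing: any threshold-sized subset of shares reconstructs the secret, while fewer shares reveal nothing. A minimal sketch over a prime field (the field size, seed, and parameters are illustrative; the paper's scheme additionally encodes m-bonacci sequences and uses OAM states for eavesdropping detection):

```python
import random

P = 2**31 - 1   # a Mersenne prime, large enough for a toy secret

def make_shares(secret, threshold, n, rng=None):
    """Shamir-style (threshold, n) sharing: the secret is the constant term
    of a random degree-(threshold-1) polynomial evaluated at n points."""
    rng = rng or random.Random(42)
    coeffs = [secret] + [rng.randrange(P) for _ in range(threshold - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Reconstruct the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        # Modular inverse of den via Fermat's little theorem (P prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, threshold=3, n=5)
```

Any 3 of the 5 shares recover the secret; with only 2, every candidate secret remains equally consistent with the observed values.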

  20. Enhancement of combined heat and power economic dispatch using self adaptive real-coded genetic algorithm

    Subbaraj, P. [Kalasalingam University, Srivilliputhur, Tamilnadu 626 190 (India); Rengaraj, R. [Electrical and Electronics Engineering, S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India); Salivahanan, S. [S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India)

    2009-06-15

    In this paper, a self adaptive real-coded genetic algorithm (SARGA) is implemented to solve the combined heat and power economic dispatch (CHPED) problem. The self adaptation is achieved by means of tournament selection along with simulated binary crossover (SBX). The selection process has a powerful exploration capability, creating tournaments between two solutions; the better solution is chosen and placed in the mating pool, leading to better convergence and a reduced computational burden. The SARGA integrates a penalty-parameterless constraint handling strategy and simultaneously handles equality and inequality constraints. Population diversity is introduced by making use of the distribution index in the SBX operator to create better offspring. This leads to high diversity in the population, which increases the probability of reaching the global optimum and prevents premature convergence. The SARGA is applied to solve the CHPED problem with a bounded feasible operating region, which has a large number of local minima. The numerical results demonstrate that the proposed method can find a solution close to the global optimum and compares favourably with other recent methods in terms of solution quality, constraint handling and computation time. (author)

  1. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  2. Comparison of plateletpheresis on the Fresenius AS.TEC 204 and Haemonetics MCS 3p.

    Ranganathan, Sudha

    2007-02-01

    This is an attempt at comparing two cell separators for plateletpheresis, namely the Fresenius AS.TEC 204 and the Haemonetics MCS 3p, at a tertiary care center in India. Donors who weighed 55-75 kg, had a hematocrit of 41-43%, and had platelet counts of 250×10³-400×10³/μl were selected for the study. The comparability of the donors who donated on the two cell separators was analysed by t-test for independent samples, and no significant differences were found (P>0.05). The features compared were the time taken for the procedure, the volume processed on the separators, adverse reactions of the donors, quality control of the product, separation efficiency of the separators, platelet loss in the donors after the procedure, and the predicted versus the actual yield of platelets given by the cell separator. The volume processed to obtain a target yield of >3×10¹¹ was 2.8-3.2 l and equal in both cell separators. Symptoms of citrate toxicity were seen in 4% and 2.5% of donors who donated on the MCS 3p and the AS.TEC 204, respectively, and 3% and 1% of donors, respectively, had vasovagal reactions. All the platelet products collected had a platelet count of >3×10¹¹; 90% of the platelet products collected on the AS.TEC 204 attained the predicted yield that was set on the cell separator, whereas 75% of the platelet products collected on the MCS 3p attained the target yield. Quality control of the platelets collected on both cell separators complied with the standards, except that 3% of the platelets collected on the MCS 3p had visible red cell contamination. The separation efficiency of the MCS 3p was higher, 50-52%, as compared to 40-45% on the AS.TEC 204. The provision of double venous access, fewer adverse reactions and negligible RBC contamination, together with a better predicted platelet yield, make the AS.TEC 204 a safer and more reliable alternative to the widely used Haemonetics MCS 3p. Copyright (c) 2006 Wiley-Liss, Inc.

  3. Analysis and evaluation of the ASTEC model basis on plant simulations. 2. Technical report

    Koppers, Vera; Braehler, Thimo; Koch, Marco K.

    2015-06-01

    The present report is the 2nd Technical Report of the research project ''Analysis and Evaluation of the ASTEC model basis'' funded by the Federal Ministry for Economic Affairs and Energy (BMWi 1501433) and conducted at the Reactor Simulation and Safety Group, Chair of Energy Systems and Energy Economics (LEE) at Ruhr-Universitaet Bochum. Within this report, the quality of a nuclear power plant simulation with ASTEC is investigated. Different parameters are varied to analyze the simulation stability within the dataset, which describes a generic German Konvoi power plant and is deposited in the ASTEC program package. Firstly, the plant specifications in the dataset are checked for plausibility. In addition, the compliance of the dataset with the nodalization rules is verified. After that, the stationary phase, in which no accident is calculated, is analyzed, and parametric studies are performed in the transient phase, focusing on the primary and secondary circuits as well as on the containment behavior. The calculations focusing on the primary and secondary circuits indicate a high dependency of the simulation results on the user's input in the dataset. There are significant deviations between the individual simulation results, for example in the calculated point in time of the reactor pressure vessel failure. Even changes in the stationary phase cause a significantly earlier reactor pressure vessel failure compared to the simulation with the original data. Beyond that, the location of the leakage of the reactor pressure vessel lower head varied and therefore cannot be clearly determined, although there were no changes by the user to the accident course. A reliable prediction of the plant behavior under severe accident conditions is therefore difficult using ASTEC. The results of the parametric studies within the containment show the same significant influence of certain parameter changes on the simulation results. By using the

  4. Adaptation in Coding by Large Populations of Neurons in the Retina

    Ioffe, Mark L.

    A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion--greater than a half--of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitting the retinal response to different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. Our results form a consistent

  5. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is of order 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  6. Adaptive Iterative Soft-Input Soft-Output Parallel Decision-Feedback Detectors for Asynchronous Coded DS-CDMA Systems

    Zhang Wei

    2005-01-01

Full Text Available The optimum and many suboptimum iterative soft-input soft-output (SISO) multiuser detectors require a priori information about the multiuser system, such as the users' transmitted signature waveforms, relative delays, as well as the channel impulse response. In this paper, we employ adaptive algorithms in the SISO multiuser detector in order to avoid the need for this a priori information. First, we derive the optimum SISO parallel decision-feedback detector for asynchronous coded DS-CDMA systems. Then, we propose two adaptive versions of this SISO detector, based on the normalized least mean square (NLMS) and recursive least squares (RLS) algorithms. Our SISO adaptive detectors effectively exploit the a priori information of coded symbols, whose soft inputs are obtained from a bank of single-user decoders. Furthermore, we consider how to select practical finite feedforward and feedback filter lengths to obtain a good tradeoff between the performance and computational complexity of the receiver.
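
    The NLMS recursion at the heart of the first adaptive detector variant can be illustrated in isolation. The sketch below is a generic NLMS system-identification loop in Python, not the paper's full SISO decision-feedback receiver; the channel taps, step size, and signal lengths are illustrative assumptions.

```python
# Generic NLMS system identification: adapt filter weights w so that
# w applied to x tracks the desired signal d. Channel taps, step size,
# and signal lengths below are illustrative assumptions only.
import random

def nlms_identify(x, d, num_taps, mu=0.5, eps=1e-8):
    """Return weights adapted by the normalized LMS recursion."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]        # regressor, newest first
        y = sum(wi * ui for wi, ui in zip(w, u))     # filter output
        e = d[n] - y                                 # a-priori error
        norm = sum(ui * ui for ui in u) + eps        # energy normalization
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
    return w

random.seed(0)
h = [0.8, -0.4, 0.2]                                 # "unknown" channel
x = [random.choice([-1.0, 1.0]) for _ in range(2000)]  # BPSK-like input
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]                         # noiseless channel output
w = nlms_identify(x, d, num_taps=3)
print([round(wi, 3) for wi in w])                    # close to h
```

    In the noiseless case the weights converge to the channel taps; in a detector the same update instead tracks the feedforward/feedback filters from the received chips and decoder soft outputs.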

  7. Analyzing and modeling the BIP Orgi aqueous formation tests with ASTEC-IODE code

    Vela-Garcia, M.; Herranz, L. E.

    2011-07-01

In the event of a severe accident, some fission products could be released from the fuel and become airborne in the reactor containment atmosphere. Because of its biological sensitivity (thyroid uptake) and its volatility, iodine is one of the most important radiological concerns in such scenarios.

  8. Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model

    S.M. Bohte (Sander)

    2012-01-01

Neural adaptation underlies the ability of neurons to maximize encoded information over a wide dynamic range of input stimuli. While adaptation is an intrinsic feature of neuronal models like the Hodgkin-Huxley model, the challenge is to integrate adaptation in models of neural

  9. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Oscar Karnalim

    2017-01-01

Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them consider only the lexical token sequence extracted from source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself. It considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient when compared with the standard lexical-token approach.
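
    Adaptive Local Alignment builds on classic local sequence alignment over token streams. The following Python sketch shows plain Smith-Waterman-style local alignment scoring on hypothetical CIL-like instruction sequences; the adaptive per-token weighting of the actual algorithm is omitted, and the scoring constants are illustrative.

```python
# Smith-Waterman-style local alignment over instruction-token sequences:
# the dynamic-programming score peaks where a copied region aligns, even
# if extra instructions are inserted around it. Scoring constants and the
# CIL-like token streams below are illustrative assumptions.

def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between token sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            h[i][j] = max(0,
                          h[i - 1][j - 1] + s,   # match / mismatch
                          h[i - 1][j] + gap,     # gap in b
                          h[i][j - 1] + gap)     # gap in a
            best = max(best, h[i][j])
    return best

# A copied method body with an inserted no-op still aligns strongly.
orig = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
plag = ["nop", "ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
print(local_alignment_score(orig, plag))         # strong local match
```

    A full detector would normalize this score by sequence length and, in the adaptive variant, weight frequent tokens less than rare ones.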

  10. Anti-voice adaptation suggests prototype-based coding of voice identity

    Marianne eLatinus

    2011-07-01

Full Text Available We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices, then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.

  11. Adaptation and implementation of the TRACE code for transient analysis in designs lead cooled fast reactors

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2015-01-01

The Lead-Cooled Fast Reactor (LFR) has been identified as one of the promising future reactor concepts in the technology road map of the Generation IV International Forum (GIF) as well as in the Deployment Strategy of the European Sustainable Nuclear Industrial Initiative (ESNII), both aiming at improved sustainability, enhanced safety, economic competitiveness, and proliferation resistance. This new nuclear reactor concept requires the development of computational tools to be applied in design and safety assessments to confirm the improved inherent and passive safety features of this design. One approach to this issue is to modify the current computational codes developed for the simulation of Light Water Reactors towards applicability to the new designs. This paper reports on the modifications performed on the TRACE system code to make it applicable to LFR safety assessments. The capabilities of the modified code are demonstrated on a series of benchmark exercises performed against other safety analysis codes. (Author)

  12. Adaptive Network Coded Clouds: High Speed Downloads and Cost-Effective Version Control

    Sipos, Marton A.; Heide, Janus; Roetter, Daniel Enrique Lucani

    2018-01-01

    Although cloud systems provide a reliable and flexible storage solution, the use of a single cloud service constitutes a single point of failure, which can compromise data availability, download speed, and security. To address these challenges, we advocate for the use of multiple cloud storage...... providers simultaneously using network coding as the key enabling technology. Our goal is to study two challenges of network coded storage systems. First, the efficient update of the number of coded fragments per cloud in a system aggregating multiple clouds in order to boost the download speed of files. We...... developed a novel scheme using recoding with limited packets to trade-off storage space, reliability, and data retrieval speed. Implementation and measurements with commercial cloud providers show that up to 9x less network use is needed compared to other network coding schemes, while maintaining similar...
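
    The storage scheme rests on random linear network coding. As a minimal sketch, the Python below codes fragments over GF(2) (XOR combinations) and recovers them by Gaussian elimination once a full-rank set of coded packets is gathered; the paper's recoding and per-cloud update policies are not reproduced, and the fragment contents are made up.

```python
# Random linear network coding over GF(2): each coded packet is an XOR of
# a random subset of fragments plus its coefficient vector. Any set of
# packets whose coefficients reach full rank recovers the original file.
import random

def encode(fragments, num_coded, rng):
    """Produce coded packets as (coefficient_list, xor_payload) pairs."""
    k = len(fragments)
    coded = []
    while len(coded) < num_coded:
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue                          # all-zero packet carries nothing
        payload = bytes(len(fragments[0]))
        for c, frag in zip(coeffs, fragments):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, frag))
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gauss-Jordan elimination over GF(2); fragments, or None if rank < k."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                       # not yet decodable
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(k)]

rng = random.Random(1)
fragments = [b"net", b"wor", b"kco", b"des"]      # k = 4 equal-size fragments
packets = []
recovered = None
while recovered is None:                          # fetch packets until decodable
    packets.extend(encode(fragments, num_coded=2, rng=rng))
    recovered = decode(packets, k=len(fragments))
print(recovered == fragments)
```

    In the multi-cloud setting each provider would hold a subset of such packets, so a download can mix packets from whichever clouds respond fastest.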

  13. An Adaptation of the HELIOS/MASTER Code System to the Analysis of VHTR Cores

    Noh, Jae Man; Lee, Hyun Chul; Kim, Kang Seog; Kim, Yong Hee

    2006-01-01

    KAERI is developing a new computer code system for an analysis of VHTR cores based on the existing HELIOS/MASTER code system which was originally developed for a LWR core analysis. In the VHTR reactor physics, there are several unique neutronic characteristics that cannot be handled easily by the conventional computer code system applied for the LWR core analysis. Typical examples of such characteristics are a double heterogeneity problem due to the particulate fuels, the effects of a spectrum shift and a thermal up-scattering due to the graphite moderator, and a strong fuel/reflector interaction, etc. In order to facilitate an easy treatment of such characteristics, we developed some methodologies for the HELIOS/MASTER code system and tested their applicability to the VHTR core analysis

  14. Scalable Stream Coding for Adaptive Foveation Enhanced Percept Multimedia Information Communication for Interactive Medical Applications

    Khan, Javed

    2003-01-01

    .... The demonstrated systems include interactive perceptual transcoding where real-time eye-tracker data fuses with a passing stream, the active subnet diffusion coding-- where multiple active nodes...

  15. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    Al-Ghadhban, Samir

    2014-01-01

    © 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC

  16. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-01-01

We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed with a precoder admitting level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems

  17. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-05-01

We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed with a precoder admitting level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems.

  18. On the feedback error compensation for adaptive modulation and coding scheme

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  19. On the feedback error compensation for adaptive modulation and coding scheme

    Choi, Seyeong; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify

  20. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

    This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both...... adaptation algorithm has linear complexity with the number of available resource blocks, while still provides a very good performance, as shown by simulation results....

  1. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Ying Chen

    2018-03-01

Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available and the content features of the input signal are not considered. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes into account the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
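
    The mode-decision step that the Lagrange multiplier controls can be shown with a toy example. The Python sketch below picks, among hypothetical coding modes with made-up (rate, distortion) pairs, the one minimizing the Lagrangian cost J = D + λR; it is a generic RDO illustration, not the proposed subband-weighted selection.

```python
# Lagrangian mode decision: for each candidate mode with (rate, distortion),
# choose the minimizer of J = D + lam * R. The modes and the numbers below
# are invented purely for illustration.

def best_mode(candidates, lam):
    """candidates maps mode name -> (rate_bits, distortion)."""
    return min(candidates,
               key=lambda m: candidates[m][1] + lam * candidates[m][0])

modes = {
    "intra": (1200, 40.0),   # expensive in rate, low distortion
    "inter": (300, 55.0),    # moderate rate and distortion
    "skip":  (5, 90.0),      # nearly free, high distortion
}

print(best_mode(modes, lam=0.001))  # rate barely counts -> "intra"
print(best_mode(modes, lam=0.1))    # balanced -> "inter"
print(best_mode(modes, lam=1.0))    # rate dominates -> "skip"
```

    The whole point of the paper's weighting model is choosing λ per temporal subband, since one global value cannot suit all subband characteristics.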

  2. Code bench-marking for long-term tracking and adaptive algorithms

    Schmidt, Frank; Alexahin, Yuri; Amundson, James; Bartosik, Hannes; Franchetti, Giuliano; Holmes, Jeffrey; Huschauer, Alexander; Kapin, Valery; Oeftiger, Adrian; Stern, Eric; Titze, Malte

    2016-01-01

At CERN we have ramped up a program to investigate space charge effects in the LHC pre-injectors with high brightness beams and long storage times, in view of the LIU upgrade project for these accelerators. These studies require massive simulations over large numbers of turns. To this end we have been looking at all available codes and started collaborations on code development with several laboratories: pyORBIT from SNS, SYNERGIA from Fermilab, MICROMAP from GSI and our in-house MAD-X cod...

  3. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  4. Computerized coding system for life narratives to assess students' personality adaption

    He, Q.; Veldkamp, B.P.; Westerhof, G.J.; Pechenizkiy, Mykola; Calders, Toon; Conati, Cristina; Ventura, Sebastian; Romero, Cristobal; Stamper, John

    2011-01-01

    The present study is a trial in developing an automatic computerized coding framework with text mining techniques to identify the characteristics of redemption and contamination in life narratives written by undergraduate students. In the initial stage of text classification, the keyword-based

  5. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Tarek Chehade

    2015-01-01

Full Text Available In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly combine precoded closed-loop MIMO systems and nonbinary low-density parity-check (NB-LDPC) codes. The q elements in the Galois field, GF(q), are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to being jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.

  6. Model of nuclear reactor type VVER-1000/V-320 built by computer code ATHLET-CD

    Georgiev, Yoto; Filipov, Kalin; Velev, Vladimir

    2014-01-01

A model of the nuclear reactor type VVER-1000/V-320 developed for the computer code ATHLET-CD 2.1A is presented. Validation of the model has been performed; an analysis of a station blackout scenario with a LOCA on the fourth cold leg is shown. As the calculation has been completed, the results are checked through comparison with results from the computer codes ATHLET-2.1A, ASTEC-2.1 and RELAP5mod3.2

  7. Simulation of the in-pile test Phebus-FPT3 using ASTEC V2 and ATHLET-CD 2.1A

    Kruse, Philipp; Koch, Marco K. [Bochum Univ. (Germany). Chair of Energy Systems and Energy Economics

    2011-07-01

The Phebus-FPT programme, initiated in 1988 by the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) and the Joint Research Centre (JRC) of the European Commission (EC), was performed in the Phebus facility operated by the Commissariat à l'Énergie Atomique (CEA). The facility represents a 900 MWe Pressurized Water Reactor (PWR) scaled down by a factor of 1:5000; its objective is to study fuel degradation and the subsequent release, transport and retention of fission products, structure, control rod and fuel materials in case of a severe accident. The Phebus-FPT programme consists of integral in-pile tests, varying the fuel burn-up and geometry, the control rod nature, and the thermal-hydraulic conditions in the bundle, through the experimental circuit, and in the containment. Primarily, the integral experiments should provide a detailed description of the main phenomena of core degradation, fission product release and transport, as well as radionuclide interactions. This makes it possible to analyse the physical and chemical processes occurring during a severe accident. With the data obtained, an evaluation of accident management measures can be made as well. A secondary aim of the Phebus tests was to enable model development and evaluation of severe accident codes such as ASTEC and ATHLET-CD. (orig.)

  8. Validation of MCCI models implemented in ASTEC MEDICIS on OECD CCI-2 and CCI-3 experiments and further consideration on reactor cases

    Agethen, K.; Koch, M.K., E-mail: agethen@lee.rub.de, E-mail: koch@lee.rub.de [Ruhr-Universitat Bochum, Energy Systems and Energy Economics, Reactor Simulation and Safety Group, Bochum (Germany)

    2014-07-01

During a severe accident in a light water reactor, a loss of coolant can result in core melting and vessel failure. Afterwards, molten core material may discharge into the containment cavity and interact with the concrete basemat. Concrete erosion releases gases, which lead to exothermic oxidation reactions with the metals in the corium and to the formation of combustible mixtures. In this work the MEDICIS module of the Accident Source Term Evaluation Code (ASTEC) is validated against experiments of the OECD CCI programme. The primary focus is set on the CCI-2 experiment with limestone common sand (LCS) concrete, in which nearly homogeneous erosion appeared, and the CCI-3 experiment with siliceous concrete, in which increased lateral erosion occurred. These experiments enable the analysis of heat transfer, depending on the axial and radial orientation, from the interior of the melt to the surrounding surfaces, and of the impact of top flooding. For the simulation of both tests, two existing models in MEDICIS are used and analysed. Results of the simulations show good agreement of ablation behaviour, layer temperature and energy balance with the experimental results. Furthermore, a quasi-steady state appeared in the long-term energy balance. Finally, the basic data are scaled up to a generic reactor scenario, which shows that this quasi-steady state similarly occurs. (author)

  9. Adaptation of Toodee-2 computer code for reflood analysis in Angra-1 reactor

    Praes, J.G.L.; Onusic Junior, J.

    1981-01-01

A method of calculating the heat transfer coefficient used in the Toodee-2 computer code for core reflood analysis in a loss-of-coolant accident is presented. Preliminary results are presented using heat transfer correlations based on FLECHT experiments, adequate for a 16 x 16 geometric arrangement (Angra-1). Optional calculations are suggested for the heat transfer coefficients when the cooling of the fuel cladding by steam is considered. (Author) [pt

  10. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in a wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
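
    The assignment problem behind the ILP can be illustrated with an exhaustive toy search. In the Python sketch below, each SVC layer gets one MCS, a frame-airtime budget makes some assignments infeasible, and users decode consecutive layers only while their SNR meets each layer's MCS threshold; all numbers are invented, and brute force stands in for an actual ILP solver.

```python
# Toy exhaustive search for the ILP's decision: assign an MCS to each SVC
# layer so total airtime fits the frame and the summed video utility over
# heterogeneous users is maximized. All numbers are illustrative.
from itertools import product

MCS = {0: (1.0, 2.0), 1: (2.0, 8.0), 2: (4.0, 16.0)}  # id -> (bits/slot, min SNR)
layer_bits = [8.0, 8.0]            # base layer, one enhancement layer
layer_utility = [10.0, 4.0]        # utility a user gains per decoded layer
user_snr = [3.0, 9.0, 20.0]        # heterogeneous channel conditions
frame_slots = 10.0                 # airtime budget for the multicast frame

def utility(assignment):
    airtime = sum(bits / MCS[m][0] for bits, m in zip(layer_bits, assignment))
    if airtime > frame_slots:
        return -1.0                # infeasible: exceeds the frame
    total = 0.0
    for snr in user_snr:
        for util, m in zip(layer_utility, assignment):
            if snr < MCS[m][1]:
                break              # SVC dependency: higher layers unusable too
            total += util
    return total

best = max(product(MCS, repeat=len(layer_bits)), key=utility)
print(best, utility(best))
```

    Here the search elects a robust MCS for the base layer (so every user gets basic quality) and a fast MCS for the enhancement layer, which is exactly the trade-off the ILP formalizes at scale.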

  11. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Dongyul Lee

    2014-01-01

Full Text Available The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in a wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  12. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software reconfigurable multiband UWB over fiber system. To compensate the power fading and chromatic dispersion at the high frequencies of multiband OFDM UWB signal transmission over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10-3 after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
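
    Bit-loading itself is a simple greedy idea: give each subcarrier the largest constellation its SNR supports. The Python sketch below uses a made-up SNR-threshold table and illustrative per-subcarrier SNRs; it is not the paper's measured configuration.

```python
# Greedy bit-loading sketch: each OFDM subcarrier carries the largest QAM
# order its measured SNR supports, per a threshold table. The thresholds
# and SNR values are placeholders, not the paper's measured numbers.

# (required SNR in dB, bits per symbol) for successively larger constellations
THRESHOLDS = [(6.0, 1), (9.0, 2), (13.0, 4), (19.0, 6), (25.0, 8)]

def load_bits(snr_db_per_subcarrier):
    alloc = []
    for snr in snr_db_per_subcarrier:
        bits = 0
        for need, b in THRESHOLDS:
            if snr >= need:
                bits = b               # keep the largest supported order
        alloc.append(bits)
    return alloc

snrs = [5.0, 10.0, 14.5, 22.0, 27.0]   # e.g. higher subcarriers fade over fiber
alloc = load_bits(snrs)
print(alloc, "total bits/symbol:", sum(alloc))
```

    The adaptive code rate step would then pick, per band, the lowest-overhead LDPC rate that still meets the target BER at the loaded constellation.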

  13. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Pau G

    2006-01-01

Full Text Available This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.

  14. Radiation transport code with adaptive Mesh Refinement: acceleration techniques and applications

Velarde, Pedro; Garcia-Fernandez, Carlos; Portillo, David; Barbas, Alfonso

    2011-01-01

We present a study of acceleration techniques for solving Sn radiation transport equations with Adaptive Mesh Refinement (AMR). Both DSA and TSA are considered, taking into account the influence of the interaction between different levels of the mesh structure and the order of approximation in angle. A hybrid method is proposed in order to obtain a better convergence rate and lower computing times. Some examples are presented relevant to ICF and X-ray secondary sources. (author)

  15. Adaptive colour contrast coding in the salamander retina efficiently matches natural scene statistics.

    Genadiy Vasserman

Full Text Available The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina, where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two stimulated colour channels (red and blue), as would be expected from theoretically optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes at the colour processing level.

  16. Algorithms and data structures for massively parallel generic adaptive finite element codes

    Bangerth, Wolfgang

    2011-12-01

    Today\\'s largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library. © 2011 ACM 0098-3500/2011/12-ART10 $10.00.

  17. Adaptation of fuel code for light water reactor with austenitic steel rod cladding

    Gomes, Daniel de Souza; Silva, Antonio Teixeira; Giovedi, Claudia

    2015-01-01

Light water reactors used steel as nuclear fuel cladding from 1960 to 1980. The high performance achieved proved that low-carbon alloys could substitute for the current zirconium alloys. Stainless steel is an alternative that can be used as cladding. The zirconium alloys later replaced the steel; however, significant in-pile experience was accumulated in commercial units such as Haddam Neck, Indian Point, and Yankee. Stainless Steel Types 347 and 348 can be used as cladding. An advantage of using stainless steel became evident at Fukushima, where a large amount of hydrogen was produced at high temperatures. The steel cladding does not eliminate the problem of accumulating free hydrogen, which can lead to a risk of explosion. In a boiling water reactor, conditions favouring intergranular corrosion attack easily arise. The stainless steel alloys Types 321, 347, and 348 are stabilized against attack by the addition of titanium, niobium, or tantalum. Steel Type 348 contains niobium, tantalum, and cobalt; titanium stabilizes Type 321, and niobium additions stabilize Type 347. In recent years, research on the effects of irradiation by fast neutrons has increased. The impact of radiation includes changes in flow rate limits, deformation, and ductility. Irradiation can convert crystalline lattices into an amorphous structure. New proposals are emerging that suggest using silicon carbide-based fuel rod cladding or iron-chromium-aluminum alloys. These materials can substitute for the classic zirconium alloys. Once steel Type 348 was chosen, its thermal and mechanical properties were coded in a library of functions. The fuel performance code then contains all the required features. A comparative analysis of the steel and zirconium alloys was made. The results demonstrate that the austenitic steel alloys are viable candidates for substituting the zirconium alloys. (author)

  18. Adaptation of fuel code for light water reactor with austenitic steel rod cladding

    Gomes, Daniel de Souza; Silva, Antonio Teixeira, E-mail: dsgomes@ipen.br, E-mail: teixeira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Giovedi, Claudia, E-mail: claudia.giovedi@labrisco.usp.br [Universidade de Sao Paulo (POLI/USP), Sao Paulo, SP (Brazil). Lab. de Analise, Avaliacao e Gerenciamento de Risco

    2015-07-01

    Light water reactors used steel as nuclear fuel cladding from 1960 to 1980. The high performance achieved showed that low-carbon alloys could substitute for the zirconium alloys currently in use. Stainless steel is an alternative that can be used as cladding. Zirconium alloys later replaced the steel; however, significant in-pile experience had been accumulated in commercial units such as Haddam Neck, Indian Point, and Yankee. Stainless steel Types 347 and 348 can be used as cladding. An advantage of using stainless steel became evident at Fukushima, where a large amount of hydrogen was produced at high temperatures. Steel cladding does not eliminate the problem of accumulating free hydrogen, which can lead to a risk of explosion. In a boiling water reactor, environments conducive to intergranular corrosion attack readily exist. The stainless steel alloys Types 321, 347, and 348 are stabilized against such attack by additions of titanium, niobium, or tantalum: Type 348 contains niobium, tantalum, and cobalt; titanium stabilizes Type 321; and niobium additions stabilize Type 347. In recent years, research on the effects of fast-neutron irradiation has increased. The effects of irradiation include changes in flow stress limits, deformation, and ductility, and irradiation can convert crystalline lattices into an amorphous structure. New proposals suggest fuel rod claddings based on silicon carbide or on iron-chromium-aluminum alloys; these materials could substitute for the classic zirconium alloys. Once steel Type 348 was chosen, its thermal and mechanical properties were coded into a library of functions containing all the features required by the fuel performance codes. A comparative analysis of the steel and zirconium alloys was made. The results demonstrate that austenitic steel alloys are viable candidates for substituting the zirconium alloys. (author)
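
    The "library of functions" for cladding properties described above can be sketched as temperature-dependent correlations. The correlation forms below are typical of fuel performance codes, but the coefficients are illustrative placeholders, not the values fitted by the authors for steel Type 348:

```python
def thermal_conductivity(T):
    """Cladding thermal conductivity [W/(m*K)] as a linear function of
    temperature T [K]. Placeholder coefficients of a magnitude typical for
    austenitic steel, NOT the authors' fitted values."""
    return 9.0 + 0.015 * T

def youngs_modulus(T):
    """Young's modulus [Pa], decreasing linearly with temperature
    (placeholder fit)."""
    return 2.0e11 - 8.0e7 * (T - 293.0)

def thermal_expansion_strain(T, T_ref=293.0):
    """Linear thermal expansion strain relative to T_ref
    (placeholder expansion coefficient)."""
    alpha = 1.7e-5  # 1/K, representative order of magnitude for austenitic steels
    return alpha * (T - T_ref)
```

    A fuel performance code would call such functions at each axial/radial node during the thermo-mechanical solution, which is why the properties are wrapped as functions of temperature rather than stored as tables.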

  19. Efficacy of systematic pelvic lymphadenectomy in endometrial cancer (MRC ASTEC trial): a randomised study.

    Kitchener, H; Swart, A M C; Qian, Q; Amos, C; Parmar, M K B

    2009-01-10

    Hysterectomy and bilateral salpingo-oophorectomy (BSO) is the standard surgery for stage I endometrial cancer. Systematic pelvic lymphadenectomy has been used to establish whether there is extra-uterine disease and as a therapeutic procedure; however, randomised trials need to be done to assess therapeutic efficacy. The ASTEC surgical trial investigated whether pelvic lymphadenectomy could improve survival of women with endometrial cancer. From 85 centres in four countries, 1408 women with histologically proven endometrial carcinoma thought preoperatively to be confined to the corpus were randomly allocated by a minimisation method to standard surgery (hysterectomy and BSO, peritoneal washings, and palpation of para-aortic nodes; n=704) or standard surgery plus lymphadenectomy (n=704). The primary outcome measure was overall survival. To control for postsurgical treatment, women with early-stage disease at intermediate or high risk of recurrence were randomised (independent of lymph-node status) into the ASTEC radiotherapy trial. Analysis was by intention to treat. This study is registered, number ISRCTN 16571884. After a median follow-up of 37 months (IQR 24-58), 191 women (88 standard surgery group, 103 lymphadenectomy group) had died, with a hazard ratio (HR) of 1.16 (95% CI 0.87-1.54; p=0.31) in favour of standard surgery and an absolute difference in 5-year overall survival of 1% (95% CI -4 to 6). 251 women died or had recurrent disease (107 standard surgery group, 144 lymphadenectomy group), with an HR of 1.35 (1.06-1.73; p=0.017) in favour of standard surgery and an absolute difference in 5-year recurrence-free survival of 6% (1-12). With adjustment for baseline characteristics and pathology details, the HR for overall survival was 1.04 (0.74-1.45; p=0.83) and for recurrence-free survival was 1.25 (0.93-1.66; p=0.14). Our results show no evidence of benefit in terms of overall or recurrence-free survival for pelvic lymphadenectomy in women with early

  20. Implementation and adaption of the Computer Code ECOSYS/EXCEL for Austria as OECOSYS/EXCEL

    Hick, H.; Suda, M.; Mueck, K.

    1998-03-01

    During 1989, under contract to the Austrian Chamber of the Federal Chancellor, department VII, the radioecological forecast model OECOSYS was implemented by the Austrian Research Centre Seibersdorf on a VAX computer using VAX Fortran. OECOSYS allows the prediction of the consequences after a large-scale contamination event. During 1992, under contract to the Austrian Federal Ministry of Health, Sports and Consumer Protection, department III, OECOSYS - in the version of 1989 - was implemented on PCs in Seibersdorf and the Ministry using OS/2 and Microsoft Fortran. In March 1993, the Ministry ordered an update which had become necessary, and the evaluation of two exercise scenarios. Since that time the prognosis model with its auxiliary program and communication facilities has been kept on stand-by, and yearly exercises are performed to maintain its readiness. The current report describes the implementation and adaptation to Austrian conditions of the newly available EXCEL version of the German ECOSYS prognosis model as OECOSYS. (author)

  1. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  2. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  3. Uplink capacity of multi-class IEEE 802.16j relay networks with adaptive modulation and coding

    Wang, Hua; Xiong, C; Iversen, Villy Bæk

    2009-01-01

    The emerging IEEE 802.16j mobile multi-hop relay (MMR) network is currently being developed to increase user throughput and extend service coverage as an enhancement of the existing 802.16e standard. In 802.16j, intermediate relay stations (RSs) help the base station (BS) communicate with those mobile stations (MSs) that are either too far away from the BS or placed in an area where direct communication with the BS experiences an unsatisfactory level of service. In this paper, we investigate the uplink Erlang capacity of a two-hop 802.16j relay system supporting both voice and data traffic with an adaptive modulation and coding (AMC) scheme applied in the physical layer. We first develop analytical models to calculate the blocking probability in the access zone and the outage probability in the relay zone, respectively. Then a joint algorithm is proposed to determine the bandwidth distribution...
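
    The access-zone blocking model mentioned above can be illustrated with the classical Erlang B recursion. This is a generic teletraffic building block, not necessarily the exact analytical model of the paper (which also treats the relay zone and AMC):

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability B(n, A) via the numerically stable recursion:
    B(0, A) = 1,  B(n, A) = A * B(n-1, A) / (n + A * B(n-1, A)).
    `offered_load` A is in Erlangs, `servers` n is the number of channels."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Example: 10 channels offered 5 Erlangs of voice traffic
p_block = erlang_b(10, 5.0)
```

    The recursive form avoids the factorials of the closed-form Erlang B expression, which overflow for large channel counts.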

  4. Conception and development of an adaptive energy mesher for multigroup library generation of the transport codes

    Mosca, P.

    2009-12-01

    The deterministic transport codes solve the stationary Boltzmann equation in a discretized energy formalism called multigroup. The transformation of continuous data into multigroup form is obtained by averaging the highly variable cross sections of the resonant isotopes with the solution of the self-shielding models, and the remaining ones with the coarse energy spectrum of the reactor type. So far, the error of such an approach could only be evaluated retrospectively. To remedy this, we study in this thesis a set of methods to control a priori the accuracy and the cost of the multigroup transport computation. The energy mesh optimisation is achieved using a two-step process: the creation of a reference mesh and its optimized condensation. In the first stage, by refining the energy mesh locally and globally, we seek, on a fine energy mesh with subgroup self-shielding, a solution equivalent to that of a reference solver (Monte Carlo or pointwise deterministic solver). In the second step, once the number of groups is fixed according to the acceptable computational cost and the self-shielding models best suited to the reactor type are chosen, we search for the bounds of the reference mesh that minimize the reaction rate errors, using the particle swarm optimization algorithm. This new approach allows us to define new meshes for fast reactors that are as accurate as the currently used ones but have fewer groups. (author)
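
    The particle swarm optimization step used above to place group bounds can be sketched generically. The toy sphere objective here merely stands in for the reaction-rate error functional, whose actual evaluation requires the transport solver:

```python
import random

def pso_minimize(objective, dim, bounds, n_particles=20, iters=150,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best and is attracted toward it and toward the global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in objective: minimize sum of squares over a 2-D search box
best, best_val = pso_minimize(lambda x: sum(v * v for v in x),
                              dim=2, bounds=(-5.0, 5.0))
```

    In the thesis's setting, each particle position would encode a candidate set of group boundaries and the objective would be the resulting reaction-rate error against the reference mesh.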

  5. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and implemented a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.
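
    The measurement-gathering idea behind CDG can be illustrated with a minimal compressed-sensing round trip: the sink collects m random projections y = Φx of an n-dimensional, k-sparse sensed field and recovers it. Orthogonal Matching Pursuit is used here as a stand-in recovery algorithm; the paper's adaptive, feedback-driven measurement formation and NC-based aggregation are more elaborate:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = Phi @ x
    by repeatedly selecting the column most correlated with the residual."""
    residual = y.copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 100, 30, 3                     # n sensor readings, m measurements, sparsity k
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                              # measurements accumulated along routing paths
x_hat = omp(Phi, y, k)
```

    The point of CDG is that only m ≪ n values travel toward the sink, with each relay adding its weighted reading into the running projections.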

  6. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Umapathy Karthikeyan

    2007-01-01

    Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and the demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, in which the joint time-frequency (TF) properties of the nonstationary nature of audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF-specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal-adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed, and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique, with SDG ranging from –0.53 to –2.27.

  7. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Karthikeyan Umapathy

    2007-08-01

    Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and the demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, in which the joint time-frequency (TF) properties of the nonstationary nature of audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF-specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal-adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed, and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique, with SDG ranging from –0.53 to –2.27.
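
    The "compact energy representation in fewer coefficients" central to such coders can be illustrated with a minimal transform-coding round trip. Plain magnitude thresholding of spectral coefficients stands in here for the paper's psychoacoustic filtering and adaptive TF decomposition:

```python
import numpy as np

def topk_transform_code(signal, keep_ratio=0.1):
    """Transform the signal, keep only the largest-magnitude spectral
    coefficients, zero the rest, and reconstruct. Magnitude thresholding
    is a crude stand-in for perceptual (psychoacoustic) filtering."""
    coeffs = np.fft.rfft(signal)
    k = max(1, int(keep_ratio * coeffs.size))
    keep = np.argsort(np.abs(coeffs))[-k:]     # indices of the k largest coefficients
    pruned = np.zeros_like(coeffs)
    pruned[keep] = coeffs[keep]
    return np.fft.irfft(pruned, n=signal.size)

# Two-tone test signal: energy is concentrated in very few coefficients,
# so keeping 1% of them reconstructs it almost perfectly.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
x_hat = topk_transform_code(x, keep_ratio=0.01)
snr_db = 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))
```

    Real audio is less sparse than a two-tone signal, which is why perceptual models are needed to decide which coefficients are inaudible rather than merely small.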

  8. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed, enabling a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores grows, increasing the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which does not need a physical interconnection layout because data travel over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, less area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It is shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipated in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
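
    The CDMA principle behind such a MAC can be sketched with orthogonal Walsh spreading codes, which let two transmitter-receiver pairs share the wireless channel simultaneously. The actual protocol's code assignment and arbitration logic are not modeled here:

```python
import numpy as np

def walsh_matrix(order):
    """Walsh-Hadamard matrix of size 2**order via the Sylvester construction;
    its rows are mutually orthogonal spreading codes."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

W = walsh_matrix(3)                 # 8 orthogonal spreading codes of length 8
bits_a = np.array([1, -1, 1])       # antipodal data for transmitter pair A
bits_b = np.array([-1, -1, 1])      # data for transmitter pair B
code_a, code_b = W[1], W[2]

# Both pairs spread their bits and transmit at the same time; the channel
# carries the chip-level sum of the two spread streams.
channel = np.concatenate([b * code_a for b in bits_a]) + \
          np.concatenate([b * code_b for b in bits_b])

# Each receiver despreads by correlating every 8-chip block with its own code;
# orthogonality cancels the other pair's contribution.
rx_a = [int(np.sign(channel[i * 8:(i + 1) * 8] @ code_a)) for i in range(3)]
rx_b = [int(np.sign(channel[i * 8:(i + 1) * 8] @ code_b)) for i in range(3)]
```

    Both bit streams are recovered exactly despite overlapping in time and frequency, which is what allows concurrent transmitter-receiver pairs on the shared wireless NoC channel.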

  9. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Comparison of different methods used in integral codes to model coagulation of aerosols

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
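
    The methods compared above all discretize the Smoluchowski coagulation equation. A minimal discrete version with a constant kernel (a simplification: the codes use size-dependent kernels and sectional size grids, and the cited accuracy loss appears when the bin volume ratio reaches two or more) can be written as:

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit-Euler step of the discrete Smoluchowski equation with a
    constant coagulation kernel K. n[k] is the number density of clusters
    containing k+1 monomers."""
    m = n.size
    conv = np.convolve(n, n)             # conv[s] = sum_i n[i] * n[s-i]
    gain = np.zeros(m)
    gain[1:] = 0.5 * K * conv[:m - 1]    # cluster k formed from pairs with i + j = k - 1
    loss = K * n * n.sum()               # every collision removes the colliding clusters
    return n + dt * (gain - loss)

m = 400
n = np.zeros(m)
n[0] = 1.0                               # start from monomers only
for _ in range(500):                     # integrate to t = 5 with dt = 0.01
    n = smoluchowski_step(n, K=1.0, dt=0.01)

total_number = n.sum()                   # analytic result: 1/(1 + K*t/2) = 2/7 at t = 5
total_mass = (np.arange(1, m + 1) * n).sum()   # conserved (= 1) up to truncation
```

    Number density decays while mass is conserved; the integral codes trade this fine monomer-resolved grid for a handful of geometric size sections, which is exactly where the methods compared in the paper start to differ.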

  11. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs better than its prior-art binary counterpart: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  12. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

    The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm combines the advantages of fuzzy inference systems and artificial neural networks to improve effectiveness in applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly (the value in each bin was generated randomly, and each generated energy spectrum was then normalized). The randomly generated neutron energy spectra were used as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (the determinant of the matrix being close to zero) must be solved. Solving this inverse problem with conventional methods unfolds the neutron energy spectrum with low accuracy. Application of iterative algorithms to such a problem, or use of intelligent algorithms (which avoid solving the inverse problem directly), is usually preferred for unfolding the energy spectrum. Therefore, the main reason for developing intelligent algorithms like ANFIS for unfolding neutron energy spectra is to avoid solving the inverse problem. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were
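
    The near-singular response matrix problem motivating ANFIS can be demonstrated directly: when the response is smooth (here modeled as a generic Gaussian-blur-like matrix, not the actual detector response), direct inversion amplifies even tiny measurement noise catastrophically:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
E = np.linspace(0.0, 1.0, n)            # energy grid (arbitrary units)

# Generic smooth detector response: each pulse-height channel integrates a
# Gaussian-weighted slice of the spectrum, so neighbouring rows overlap
# heavily and the response matrix is nearly singular.
R = np.exp(-((E[:, None] - E[None, :]) ** 2) / (2 * 0.05 ** 2))
R /= R.sum(axis=1, keepdims=True)

true_spectrum = np.exp(-((E - 0.4) ** 2) / (2 * 0.02 ** 2))
pulse_height = R @ true_spectrum
noisy = pulse_height + 1e-4 * rng.normal(size=n)   # tiny measurement noise

naive = np.linalg.solve(R, noisy)       # direct inversion amplifies the noise
cond = np.linalg.cond(R)
rel_err = np.linalg.norm(naive - true_spectrum) / np.linalg.norm(true_spectrum)
```

    The relative error of the naive solution exceeds 100% even though the noise is tiny, because the condition number of R is enormous; learned mappings such as ANFIS avoid ever forming this inversion.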

  13. Adaptation.

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  14. Adaptation

    building skills, knowledge or networks on adaptation, ... the African partners leading the AfricaAdapt network, together with the UK-based Institute of Development Studies; and ... UNCCD Secretariat, Regional Coordination Unit for Africa, Tunis, Tunisia .... 26 Rural–urban Cooperation on Water Management in the Context of.

  15. Coolability in the frame of core melt accidents in light water reactors. Model development and validation for ATHLET-CD and ASTEC. Final report; Kuehlbarkeit im Rahmen von Kernschmelzunfaellen bei Leichtwasserreaktoren. Modellentwicklung und Validierung fuer ATHLET-CD und ASTEC. Abschlussbericht

    Buck, Michael; Pohlner, Georg; Rahman, Saidur; Berkhan, Ana

    2015-07-15

    The code system ATHLET/ATHLET-CD is being developed in the frame of the reactor safety research of the German Federal Ministry for Economic Affairs and Energy (BMWi) within the topic analysis of transients and accident sequences. It serves for the simulation of transients and accidents to be used in safety analyses for light water reactors. In the present project, the development and validation of ATHLET-CD models for describing the processes during severe accidents are continued. This work should enable comprehensive safety analyses through a mechanistic description of the processes even during the late phases of core degradation, and thereby a sound estimate of coolability and accident management options during every phase. With the current status of modelling in ATHLET-CD, analyses of coolability are performed to give a solid basis for estimates of stabilization by cooling or of accident progression, depending on the scenario. The modelling in the MEWA module, which describes the processes in a severely degraded core in ATHLET-CD, is extended to the processes in the lower plenum. For this, the melt pool model is extended and linked to the RPV wall. The coupling between MEWA and the thermal-hydraulics of ATHLET-CD is improved. The validation of the models is continued with calculations of new experiments and comparative analyses performed in the frame of the European network SARNET-2. For the European integral code ASTEC, contributions from the ATHLET-CD modelling will be provided, in particular a model for melt behaviour in the lower plenum of an LWR. This report describes the work carried out in the frame of this project and presents calculation results and the status of validation through recalculations of experiments on debris bed coolability, melt pool behaviour, and jet fragmentation and debris bed formation.

  16. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Kirk, B.L. [Oak Ridge National Lab., TN (United States); Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France); Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  17. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  18. Application of MELCOR Code to a French PWR 900 MWe Severe Accident Sequence and Evaluation of Models Performance Focusing on In-Vessel Thermal Hydraulic Results

    De Rosa, Felice

    2006-01-01

    In the ambit of the Severe Accident Network of Excellence Project (SARNET), funded by the European Union within the 6th Framework Programme (FISA, Fission Safety), one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC Code). One of the reference codes used for comparison with ASTEC results, in both experimental and reactor plant applications, is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations for a French PWR 900 MWe and the accident sequence 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a Station Blackout scenario, like a TMLB accident, with the only difference being that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 deg. C, a total opening of the three relief valves when Tric (maximal core outlet temperature) rises above 603 K (330 deg. C), and accumulator isolation when the primary pressure falls below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the same structure, data, and conditions as those found in the ASTEC input decks. The main goal of the work presented in this paper is to highlight where and when MELCOR provides good enough results and why, in some cases mainly referring to its

  19. Adapt

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were subsequently queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  20. Automatic coding and selection of causes of death: an adaptation of Iris software for using in Brazil.

    Martins, Renata Cristófani; Buchalla, Cassia Maria

    2015-01-01

    To prepare a dictionary in Portuguese for use in Iris and to evaluate its completeness for coding causes of death. Initially, a dictionary with all illnesses and injuries was created based on the International Classification of Diseases - tenth revision (ICD-10) codes. This dictionary was based on two sources: the electronic file of ICD-10 volume 1 and the data from the Thesaurus of the International Classification of Primary Care (ICPC-2). Then, a death certificate sample from the Program of Improvement of Mortality Information in São Paulo (PRO-AIM) was coded manually and by Iris version V4.0.34, and the causes of death were compared. Whenever Iris was not able to code the causes of death, adjustments were made in the dictionary. Iris was able to code all causes of death in 94.4% of death certificates, but only 50.6% were coded directly, without adjustments. Among the death certificates that the software was unable to fully code, 89.2% had a diagnosis of external causes (chapter XX of ICD-10). This group of causes of death showed less agreement when comparing the coding by Iris to the manual one. The software performed well, but it needs adjustments and improvements in its dictionary. In the upcoming versions of the software, its developers are trying to solve the external causes of death problem.

  1. Adaption, validation and application of advanced codes with 3-dimensional neutron kinetics for accident analysis calculations - STC with Bulgaria

    Grundmann, U.; Kliem, S.; Mittag, S.; Rohde, U.; Seidel, A.; Panayotov, D.; Ilieva, B.

    2001-08-01

    In the frame of a project on scientific-technical co-operation funded by BMBF/BMWi, the program code DYN3D and the coupled code ATHLET-DYN3D have been transferred to the Institute for Nuclear Research and Nuclear Energy (INRNE) Sofia. The coupled code represents an implementation of the 3D core model DYN3D, developed by FZR, into the GRS thermal-hydraulics code system ATHLET. For the purpose of validation of these codes, a measurement data base on a start-up experiment obtained at unit 6 of the Kozloduy NPP (VVER-1000/V-320) has been generated. The results of the performed validation calculations were compared with measurement values from the data base. A simplified model for the estimation of cross flow mixing between fuel assemblies has been implemented into the program code DYN3D by Bulgarian experts. Using this cross flow model, transient processes with asymmetrical boundary conditions can be analysed more realistically. The validation of the implemented model was performed by means of comparison calculations between the modified DYN3D code and the thermal-hydraulics code COBRA-4I, and also on the basis of the measurement data collected from the Kozloduy NPP. (orig.) [de

  2. Modification of the SAS4A Safety Analysis Code for Integration with the ADAPT Discrete Dynamic Event Tree Framework.

    Jankovsky, Zachary Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.

  3. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
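The table-free principle exploited by the authors can be illustrated with Exp-Golomb codes, the other family of variable-length codes used in H.264/AVC, which are decodable purely by arithmetic on the leading-zero count. This is a hedged illustrative sketch of table-free variable-length decoding, not the authors' actual CAVLC algorithm:

```python
def ue_decode(bits, pos=0):
    """Decode one unsigned Exp-Golomb codeword starting at bit index pos.

    Returns (value, next_pos). No lookup table or memory access is needed:
    the leading-zero count alone determines the codeword length.
    """
    # Count leading zero bits (the unary prefix).
    zeros = 0
    while bits[pos + zeros] == 0:
        zeros += 1
    pos += zeros + 1          # skip the prefix and the terminating '1'
    # Read 'zeros' suffix bits as an integer.
    suffix = 0
    for _ in range(zeros):
        suffix = (suffix << 1) | bits[pos]
        pos += 1
    return (1 << zeros) - 1 + suffix, pos

# '00111' has two leading zeros, a '1', then suffix '11' (3): 2**2 - 1 + 3 = 6
value, nxt = ue_decode([0, 0, 1, 1, 1])
```

Actual CAVLC tables are context-dependent and irregular, which is why replacing them with a program, as the paper proposes, takes more care than this regular code family suggests.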

  4. Adaptation and implementation of the TRACE code for transient analysis of lead-cooled fast reactor designs

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2014-01-01

    The article describes the changes implemented in the TRACE code to include thermodynamic tables for liquid lead drawn from experimental results. It then explains the process of developing a thermal-hydraulic model of the ALFRED prototype and the analysis of a selection of representative transients conducted within the framework of international research projects. The study demonstrates the applicability of the TRACE code to the simulation of lead-cooled fast reactor designs and shows the high safety margins available in this technology to accommodate the most severe transients identified in its safety study. (Author)

  5. Entropy Coding in HEVC

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  6. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, an Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  7. Adjuvant external beam radiotherapy in the treatment of endometrial cancer (MRC ASTEC and NCIC CTG EN.5 randomised trials): pooled trial results, systematic review, and meta-analysis.

    Blake, P; Swart, Ann Marie; Orton, J; Kitchener, H; Whelan, T; Lukka, H; Eisenhauer, E; Bacon, M; Tu, D; Parmar, M K B; Amos, C; Murray, C; Qian, W

    2009-01-10

    Early endometrial cancer with low-risk pathological features can be successfully treated by surgery alone. External beam radiotherapy added to surgery has been investigated in several small trials, which have mainly included women at intermediate risk of recurrence. In these trials, postoperative radiotherapy has been shown to reduce the risk of isolated local recurrence but there is no evidence that it improves recurrence-free or overall survival. We report the findings from the ASTEC and EN.5 trials, which investigated adjuvant external beam radiotherapy in women with early-stage disease and pathological features suggestive of intermediate or high risk of recurrence and death from endometrial cancer. Between July, 1996, and March, 2005, 905 (789 ASTEC, 116 EN.5) women with intermediate-risk or high-risk early-stage disease from 112 centres in seven countries (UK, Canada, Poland, Norway, New Zealand, Australia, USA) were randomly assigned after surgery to observation (453) or to external beam radiotherapy (452). A target dose of 40-46 Gy in 20-25 daily fractions to the pelvis, treating five times a week, was specified. Primary outcome measure was overall survival, and all analyses were by intention to treat. These trials were registered ISRCTN 16571884 (ASTEC) and NCT 00002807 (EN.5). After a median follow-up of 58 months, 135 women (68 observation, 67 external beam radiotherapy) had died. There was no evidence that overall survival with external beam radiotherapy was better than observation, hazard ratio 1.05 (95% CI 0.75-1.48; p=0.77). 5-year overall survival was 84% in both groups. Combining data from ASTEC and EN.5 in a meta-analysis of trials confirmed that there was no benefit in terms of overall survival (hazard ratio 1.04; 95% CI 0.84-1.29) and can reliably exclude an absolute benefit of external beam radiotherapy at 5 years of more than 3%. 
With brachytherapy used in 53% of women in ASTEC/EN.5, the local recurrence rate in the observation group at 5 years

  8. From Algorithmic Black Boxes to Adaptive White Boxes: Declarative Decision-Theoretic Ethical Programs as Codes of Ethics

    van Otterlo, Martijn

    2017-01-01

    Ethics of algorithms is an emerging topic in various disciplines such as social science, law, and philosophy, but also artificial intelligence (AI). The value alignment problem expresses the challenge of (machine) learning values that are, in some way, aligned with human requirements or values. In this paper I argue for looking at how humans have formalized and communicated values, in professional codes of ethics, and for exploring declarative decision-theoretic ethical programs (DDTEP) to fo...

  9. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    2014-01-01

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  10. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    Ganapol, Barry; Maldonado, Ivan

    2014-01-23

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  11. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    Ganapol, Barry; Maldonado, Ivan

    2014-01-01

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity

  12. Evaporation over sump surface in containment studies: code validation on TOSQAN tests

    Malet, J.; Gelain, T.; Degrees du Lou, O.; Daru, V.

    2011-01-01

    During the course of a severe accident in a Nuclear Power Plant, water can be collected in the containment sump through steam condensation on walls and spray system activation. The objective of this paper is to present code validation on evaporative sump tests performed on the TOSQAN facility. The ASTEC-CPA code is used as a lumped-parameter code and specific user-defined functions are developed for the TONUS-CFD code. The tests are air-steam tests, as well as tests with other non-condensable gases (He, CO2 and SF6) under steady and transient conditions. The results show a good agreement between codes and experiments, indicating a good behaviour of the sump models in both codes. (author)
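For orientation, a lumped-parameter sump evaporation rate is commonly written as a mass-transfer analogy between the saturated vapour layer at the water surface and the bulk containment atmosphere. The sketch below is an assumption-laden illustration (Magnus saturation-pressure fit, a user-supplied mass-transfer coefficient h_m), not the actual ASTEC-CPA or TONUS sump model:

```python
import math

def evaporation_rate(T_sump, T_gas, p_v_bulk, h_m, area):
    """Evaporative mass flow (kg/s) from a sump surface:
    rate = h_m * A * (rho_sat(T_sump) - rho_v_bulk).

    h_m (m/s), and the Magnus correlation below, are illustrative
    assumptions rather than the codes' validated closures.
    """
    # Saturation pressure of water (Pa), Magnus fit, valid roughly 0-100 deg C
    T_C = T_sump - 273.15
    p_sat = 610.94 * math.exp(17.625 * T_C / (T_C + 243.04))
    R_v = 461.5  # specific gas constant of water vapour, J/(kg K)
    rho_sat = p_sat / (R_v * T_sump)       # vapour density at the surface
    rho_bulk = p_v_bulk / (R_v * T_gas)    # vapour density in the bulk gas
    # No condensation branch in this sketch: clip at zero.
    return max(0.0, h_m * area * (rho_sat - rho_bulk))
```

When the bulk partial pressure reaches saturation the driving density difference vanishes and the rate goes to zero, which is the steady-state condition the TOSQAN sump tests probe.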

  13. Adaptation of the Abaqus thermomechanics code to simulate 3D multipellet steady and transient WWER fuel rod behavior

    Kuznetsov, A.V.; Kuznetsov, V.I.; Krupkin, A.V.; Novikov, V.V.

    2015-01-01

    The capabilities of the Abaqus technology for modeling the behavior of a WWER-1000 fuel element over its operating campaign were studied, taking into account the following features: multi-contact thermomechanical interaction of the fuel pellets and the cladding, creep and swelling of the fuel, creep of the cladding, and the mechanisms of the thermophysical and mechanical behavior of the fuel-cladding gap. The code was tested on the following finite element models developed for this purpose: a 3D fuel element model with five fuel pellets, a 3D fuel element model with one fuel pellet and a cleavage in the gap, and a 3D model of a fuel rod section with one randomly fragmented pellet. A WWER-1000 fuel rod section in the middle of the core was considered, with the loads and material properties corresponding to this location. The principal feasibility of using the Abaqus technology for solving fuel design problems is shown [ru

  14. Automation and adaptation: Nurses’ problem-solving behavior following the implementation of bar coded medication administration technology

    Holden, Richard J.; Rivera-Rodriguez, A. Joy; Faye, Héléne; Scanlon, Matthew C.; Karsh, Ben-Tzion

    2012-01-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses’ operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA’s impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians’ work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  15. Dengue virus genomic variation associated with mosquito adaptation defines the pattern of viral non-coding RNAs and fitness in human cells.

    Claudia V Filomatori

    2017-03-01

    Full Text Available The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitos and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery; rather, it was found to be dependent on the selection of specific mutations in the viral 3'UTR. Dissecting the viral population and studying phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTR of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as a regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of

  16. Improved Transient Performance of a Fuzzy Modified Model Reference Adaptive Controller for an Interacting Coupled Tank System Using Real-Coded Genetic Algorithm

    Asan Mohideen Khansadurai

    2014-01-01

    Full Text Available The main objective of the paper is to design a model reference adaptive controller (MRAC) with improved transient performance. A modification to the standard direct MRAC called fuzzy modified MRAC (FMRAC) is used in the paper. The FMRAC uses a proportional control based Mamdani-type fuzzy logic controller (MFLC) to improve the transient performance of a direct MRAC. The paper proposes the application of a real-coded genetic algorithm (RGA) to tune the membership function parameters of the proposed FMRAC offline so that the transient performance of the FMRAC is improved further. In this study, a GA based modified MRAC (GAMMRAC), an FMRAC, and a GA based FMRAC (GAFMRAC) are designed for a coupled tank setup in a hybrid tank process and their transient performances are compared. The results show that the proposed GAFMRAC gives a better transient performance than the GAMMRAC or the FMRAC. It is concluded that the proposed controller can be used to obtain very good transient performance for the control of nonlinear processes.
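A real-coded GA of the kind used for offline membership-function tuning can be sketched as follows. The operators (tournament selection, BLX-0.5 blend crossover, Gaussian mutation) and all parameter values are illustrative assumptions, and a simple quadratic surrogate stands in for the closed-loop transient-performance index evaluated in the paper:

```python
import random

def real_coded_ga(cost, bounds, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, BLX-0.5 blend
    crossover, Gaussian mutation. 'bounds' lists (lo, hi) per gene."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x, lo, hi: max(lo, min(hi, x))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = sorted(pop, key=cost)[:2]   # elitism: carry the two best over
        while len(new_pop) < pop_size:
            # Tournament selection: best of three random individuals, twice
            p1 = min(rng.sample(pop, 3), key=cost)
            p2 = min(rng.sample(pop, 3), key=cost)
            child = []
            for g in range(dim):
                lo, hi = bounds[g]
                a, b = sorted((p1[g], p2[g]))
                span = (b - a) * 0.5                 # BLX-0.5 blend interval
                x = rng.uniform(a - span, b + span)  # blend crossover
                if rng.random() < 0.1:               # Gaussian mutation
                    x += rng.gauss(0, 0.1 * (hi - lo))
                child.append(clip(x, lo, hi))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)

# Surrogate cost with a known optimum at (2.0, 0.5); in the paper this would
# be a simulated closed-loop transient-performance measure.
best = real_coded_ga(lambda g: (g[0] - 2.0)**2 + (g[1] - 0.5)**2,
                     [(0.0, 5.0), (0.0, 2.0)])
```

Because genes stay as real numbers, no binary encoding/decoding step is needed, which is the usual motivation for a real-coded GA in controller tuning.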

  17. ertCPN: The adaptations of the coloured Petri-Net theory for real-time embedded system modeling and automatic code generation

    Wattanapong Kurdthongmee

    2003-05-01

    Full Text Available A real-time system is a computer system that monitors or controls an external environment. The system must meet various timing and other constraints that are imposed on it by the real-time behaviour of the external world. One of the differences between real-time and conventional software is that a real-time program must be both logically and temporally correct. To successfully design and implement a real-time system, some analysis is typically done to assure that requirements or designs are consistent and that they satisfy certain desirable properties that may not be immediately obvious from the specification. Executable specifications, prototypes and simulation are particularly useful in real-time systems for debugging specifications. In this paper, we propose adaptations to the coloured Petri-net theory to ease the modeling, simulation and code generation process of an embedded, microcontroller-based, real-time system. The benefits of the proposed approach are demonstrated by use of our prototype software tool called ENVisAge (an Extended Coloured Petri-Net Based Visual Application Generator Tool).

  18. Water evaporation over sump surface in nuclear containment studies: CFD and LP codes validation on TOSQAN tests

    Malet, J., E-mail: jeanne.malet@irsn.fr [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SCA BP 68, 91192 Gif-sur-Yvette (France); Degrees du Lou, O. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SCA BP 68, 91192 Gif-sur-Yvette (France); Arts et Métiers ParisTech, DynFluid Lab. EA92, 151, boulevard de l’Hôpital, 75013 Paris (France); Gelain, T. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), PSN-RES/SCA BP 68, 91192 Gif-sur-Yvette (France)

    2013-10-15

    Highlights: • Simulations of evaporative TOSQAN sump tests are performed. • These tests are under air–steam gas conditions with addition of He, CO2 and SF6. • The ASTEC-CPA LP and TONUS-CFD codes, with UDFs for the sump model, are used. • Validation of the sump models of both codes shows good results. • The code–experiment differences are attributed to turbulent gas mixing modeling. -- Abstract: During the course of a severe accident in a Nuclear Power Plant, water can be collected in the containment sump through steam condensation on walls and spray system activation. The objective of this paper is to present code validation on evaporative sump tests performed on the TOSQAN facility. The ASTEC-CPA code is used as a lumped-parameter code and specific user-defined functions are developed for the TONUS-CFD code. The seven tests are air–steam tests, as well as tests with other non-condensable gases (He, CO2 and SF6) under steady and transient conditions (two depressurization tests). The results show a good agreement between codes and experiments, indicating a good behavior of the sump models in both codes. The sump model developed as User-Defined Functions (UDF) for TONUS is considered well validated and is 'ready-to-use' for all CFD codes to which such UDFs can be added. The remaining discrepancies between codes and experiments are caused by turbulent transport and gas mixing, especially in the presence of non-condensable gases other than air, so that code validation on this important topic for hydrogen safety analysis is still recommended.

  19. Modeling of fission product release in integral codes

    Obaidurrahman, K.; Raman, Rupak K.; Gaikwad, Avinash J.

    2014-01-01

    The Great Tohoku earthquake and the tsunami that struck the Fukushima-Daiichi nuclear power station on March 11, 2011 intensified the need for detailed nuclear safety research, and with this objective all streams associated with severe accident phenomenology are being revisited thoroughly. The present paper covers an overview of the state-of-the-art FP release models being used, the important phenomena considered in semi-mechanistic models, and knowledge gaps in present FP release modeling. The capability of the FP release module ELSA of the ASTEC integral code to appropriately predict FP release under several diversified core degradation conditions is also demonstrated. The use of semi-mechanistic fission product release models at AERB for source-term estimation is briefly described. (author)

  20. Coding for Electronic Mail

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth the current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages is improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.
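The adaptive noiseless coding component can be illustrated with the Rice coding family associated with this author: for each block of nonnegative integers, a parameter k is chosen to minimize the coded length. This is a hedged toy sketch of the idea, not the actual USEEM implementation:

```python
def rice_encode(values, k):
    """Rice-code nonnegative integers with parameter k: a unary quotient
    (q ones followed by a terminating zero) then k remainder bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                          # unary part
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]  # binary part
    return bits

def best_k(values, k_max=8):
    """Adaptive parameter choice: pick the k giving the shortest code
    for this block, the core of adaptive noiseless coding."""
    return min(range(k_max + 1),
               key=lambda k: len(rice_encode(values, k)))

block = [3, 1, 4, 1, 5, 9, 2, 6]
k = best_k(block)
encoded = rice_encode(block, k)
```

Because k is re-selected per block, the code tracks changing source statistics without transmitting a probability model, which is what makes the scheme attractive for mixed text/image traffic.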

  1. The role of lymphadenectomy in endometrial cancer: was the ASTEC trial doomed by design and are we destined to repeat that mistake?

    Naumann, R Wendel

    2012-07-01

    This study examines the design of previous and future trials of lymph node dissection in endometrial cancer. Data from previous trials were used to construct a decision analysis modeling the risk of lymphatic spread and the effects of treatment on patients with endometrial cancer. This model was then applied to previous trials as well as other future trial designs that might be used to address this subject. Comparing the predicted and actual results in the ASTEC trial, the model closely mimics the survival results with and without lymph node dissection for the low and high risk groups. The model suggests a survival difference of less than 2% between the experimental and control arms of the ASTEC trial under all circumstances. Sensitivity analyses reveal that these conclusions are robust. Future trial designs were also modeled with hysterectomy only, hysterectomy with radiation in intermediate risk patients, and staging with radiation only with node positive patients. Predicted outcomes for these approaches yield survival rates of 88%, 90%, and 93% in clinical stage I patients who have a risk of pelvic node involvement of approximately 7%. These estimates were 78%, 82%, and 89% in intermediate risk patients who have a risk of nodal spread of approximately 15%. This model accurately predicts the outcome of previous trials and demonstrates that even if lymph node dissection was therapeutic, these trials would have been negative due to study design. Furthermore, future trial designs that are being considered would need to be conducted in high-intermediate risk patients to detect any difference. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Code Cactus

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behaviour of water-cooled and moderated reactors at high or low pressure, with boiling permitted, the fuel elements being assumed to be flat plates. The flowrate distribution among parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions that may or may not vary with time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behaviour, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)
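The flow split among parallel channels that Cactus computes can be illustrated with a minimal steady-state sketch: assuming a turbulent-type pressure drop dp = k_i * q_i**2 in each channel and a common dp across all of them, the distribution has a closed form. This is a schematic of the physical condition only, not the code's actual numerics.

```python
import math

def split_flow(q_total, k):
    """Distribute a total flowrate among parallel channels that share a
    common pressure drop, with dp = k_i * q_i**2 in each channel."""
    inv_sqrt_k = [1.0 / math.sqrt(ki) for ki in k]
    dp = (q_total / sum(inv_sqrt_k)) ** 2   # common pressure drop
    q = [math.sqrt(dp / ki) for ki in k]    # per-channel flowrates
    return dp, q
```

A channel with four times the loss coefficient carries half the flow, and the individual flowrates always sum back to the imposed total.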

  3. Used data bases to adapt the technical code of the construction (CTE) to the real climatology of Galicia; Bases de datos utilizadas para adaptar el codigo tecnico de la edificacion (CTE) a la climatologia real de Galicia

    Vazquez, M.; Izquierdo, P.; Pose, M.; Prado, M. T.; Santos, J.

    2008-07-01

    The paper describes the data bases used to study the variables solar radiation, ambient air temperature, ambient air relative humidity, and river surface water temperature, in research carried out to analyse and adapt the Spanish Technical Building Code (TBC) to the real climate of Galicia. The data bases are those of the meteorological and environmental stations of organisations with extensive networks in Galicia, together with images from the Meteosat-6 satellite. (Author)

  4. Survey of existing literature in the field of shock-absorbing materials with a view to subsequent adaptation of plastic deformation codes. Phase 1

    Draulans, J.; Fabry, J.P.; Lafontaine, I.; Richel, H.; Guyette, M.

    1985-01-01

    Shock-absorbing materials and structures can be used as part of the transport container structure or of the truck equipment. An extensive survey of the literature has provided much information, and an investigation was made to define the experimental procedures required to measure the missing material properties. Three codes were selected: EURDYN, MARC-CDC and SAMCEF. For code evaluation, a schematic container model was considered to serve as a benchmark for the evaluation of plastic deformation. For the shock calculation, the container falls from a height of 9 meters along the direction of its cylinder axis onto an unyielding flat surface. The EURDYN computer code was selected first as it is especially designed to handle dynamic problems, preferably plastic ones: EURDYN uses an explicit integration scheme in time, which makes it quite efficient for short deformation processes such as absorber collapses. The SAMCEF computer code could not readily calculate the benchmark, so a visco-plastic flow model was added to it. The MARC computer code was initially a candidate for the shock calculation, but since extensive computing time and engineering effort would have been required, it was replaced by the PLEXUS code. The results obtained with SAMCEF confirm those obtained with EURDYN, and the PLEXUS results lie in between. The proposed benchmark calculation is at the border of the capabilities of the most advanced computer codes for plastic-dynamic calculations: a complex energy absorption process seems to take place in a narrow region, moving with time, where very large shape inversions occur. This requires accurate modelling of the system in the deformed regions and a skilful choice of the numerical parameters of the computer run. The three tested codes gave qualitatively consistent results and confirm some scarce experimental results

  5. Adaptation of penelope Monte Carlo code system to the absorbed dose metrology: characterization of high energy photon beams and calculations of reference dosimeter correction factors

    Mazurier, J.

    1999-01-01

    This thesis has been performed in the framework of the national reference setting-up for absorbed dose in water, in the high energy photon beams provided by the SATURNE-43 medical accelerator of the BNM-LPRI (acronym for National Bureau of Metrology and Primary Standard Laboratory of Ionising Radiation). The aim of this work has been to develop and validate different user codes, based on the PENELOPE Monte Carlo code system, to determine the photon beam characteristics and to calculate the correction factors of reference dosimeters such as Fricke dosimeters and the graphite calorimeter. In a first step, the developed user codes were used to study the influence of the different components constituting the irradiation head. Variance reduction techniques were used to reduce the calculation time. The phase space was calculated for 6, 12 and 25 MV at the output surface of the accelerator head, and then used for calculating energy spectra and dose distributions in the reference water phantom. The results obtained were compared with experimental measurements. The second step was devoted to developing a user code for calculating the correction factors associated with both the BNM-LPRI graphite and Fricke dosimeters by means of a correlated sampling method starting from the energy spectra obtained in the first step. The calculated correction factors were then compared with experimental results and with calculations obtained with the EGS4 Monte Carlo code system. The good agreement between experimental and calculated results validates the simulations performed with the PENELOPE code system. (author)
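The correlated sampling idea used for the correction factors can be shown with a toy Monte Carlo example: evaluating two nearly identical responses on the same random samples makes the variance of their estimated difference far smaller than with independent samples. The response functions below are arbitrary stand-ins chosen for illustration, not the dosimeter models.

```python
import math
import random
import statistics

def difference_variances(n=20000, seed=1):
    """Variance of (f - g) estimated with shared vs independent samples."""
    rng = random.Random(seed)
    f = lambda x: math.exp(-x)          # stand-in for the reference response
    g = lambda x: math.exp(-1.05 * x)   # stand-in for the perturbed response
    shared, indep = [], []
    for _ in range(n):
        x = rng.random()
        shared.append(f(x) - g(x))                       # correlated sampling
        indep.append(f(rng.random()) - g(rng.random()))  # independent sampling
    return statistics.variance(shared), statistics.variance(indep)
```

Because f and g differ only slightly, their pointwise difference on shared samples varies very little, while independent sampling pays the full variance of both responses.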

  6. Validation of the RALOC-mod.4 thermal-hydraulics code on evaporation transients in the Phebus containment

    Spitz, P.B.; Lemoine, F.; Tirini, S.

    1997-01-01

    IPSN (Nuclear Protection and Safety Institute) and GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) are developing the ESCADRE-ASTEC code systems devoted to the prediction of the behaviour of water-cooled reactors during a severe accident. The RALOC-mod 4 code belongs to this system and is specifically devoted to containment thermal-hydraulics studies. IPSN has designed a thermal-hydraulic containment test programme in support of the Phebus Fission Product Test Programme /2/. Evaporation tests have recently been performed in the Phebus containment test facility. The objective of this work is to assess against these tests the capability of the RALOC-mod 4 code to capture the phenomena observed in these experiments, in particular the evaporation heat transfer and the wall heat transfers. (DM)

  7. Adaptation of computer code ALMOD 3.4 for safety analyses of Westinghouse type NPPs and calculation of main feedwater loss

    Kordis, I.; Jerele, A.; Brajak, F.

    1986-01-01

    The paper presents the theoretical foundations of the ALMOD 3.4 code and the modifications made in order to adjust the model to a Westinghouse type NPP. Test cases for verification of the added modules were run, and the loss of main feedwater (FW) transient at nominal power was analysed. (author)

  8. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms [v2; ref status: indexed, http://f1000r.es/1td

    Maxinder S Kanwal; Avinash S Ramesh; Lauren A Huang

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that ...

  9. Coding Partitions

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows the "unique decipherability" to be recovered at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the introduction of the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
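Unique decipherability, the notion the paper weakens, is itself decidable for finite codes by the classical Sardinas-Patterson test. The sketch below implements that standard test, not the paper's canonical-partition algorithm.

```python
def is_uniquely_decodable(code):
    """Sardinas-Patterson test: build successive sets of 'dangling
    suffixes'; the code is UD iff the empty word never appears."""
    C = set(code)

    def left_quotient(A, B):
        # words w such that a + w = b for some a in A, b in B
        return {b[len(a):] for a in A for b in B if b.startswith(a)}

    S = left_quotient(C, C) - {""}   # suffixes from pairs of distinct codewords
    seen = set()
    while S:
        if "" in S:
            return False             # two different factorisations exist
        key = frozenset(S)
        if key in seen:              # sets repeat: no ambiguity can ever arise
            return True
        seen.add(key)
        S = left_quotient(C, S) | left_quotient(S, C)
    return True
```

For example, {0, 01, 10} is not UD because "010" parses both as 0.10 and 01.0, whereas any prefix code passes trivially.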

  10. A Comparison of Outcomes Following Laparoscopic and Open Hysterectomy With or Without Lymphadenectomy for Presumed Early-Stage Endometrial Cancer: Results From the Medical Research Council ASTEC Trial.

    Kyrgiou, Maria; Swart, Anne-Marie; Qian, Wendi; Warwick, Jane

    2015-10-01

    Laparoscopic hysterectomy (LH) is increasingly used for the management of endometrial malignancy. Its benefits may be particularly pronounced as these women are more likely to be older or obese. The aim of this study was to determine whether outcomes for LH are comparable to the open hysterectomy (OH). This was a prospective cohort study nested within the multicenter ASTEC (A Study in the Treatment of Endometrial Cancer) randomized controlled trial (1998-2005). Women with presumed early endometrial cancer were included. Laparoscopic hysterectomy was compared with OH with or without systematic lymphadenectomy. Overall survival, time to first recurrence, complication rates, and surgical outcomes were the main outcome measures. Of 1408 women, 1309 (93%) received OH, and 99 (7%) had LH. LH was associated with longer operating time (median, LH 105 minutes [interquartile range (IQR), 60-150] vs OH 80 minutes [IQR, 60-95]; P < 0.001) but 50% shorter hospital stay (median, LH 4 days [IQR, 3-5] vs OH 6 days [IQR, 5-7]). The number of harvested lymph nodes was similar (median, LH 13 [IQR, 10-16] vs OH 12 [IQR, 11-13]; P = 0.67). LH had fewer intraoperative and postoperative adverse events (9% difference, LH 21% vs OH 30%; borderline significance; P = 0.07). The rate of conversion to laparotomy for the LH group was high (27%). The median follow-up was 37 months. After adjusting for significant prognostic factors, the hazard ratio for overall survival in those who underwent LH compared with those who underwent OH was 0.67 (95% confidence interval, 0.31-1.43) (P = 0.30). Laparoscopic hysterectomy for early endometrial cancer is safe. Although it requires longer operating time it is associated with shorter hospital stay and favorable morbidity profile. Further studies are required to assess the long-term safety.

  11. Body mass index does not influence post-treatment survival in early stage endometrial cancer: results from the MRC ASTEC trial.

    Crosbie, Emma J; Roberts, Chris; Qian, Wendi; Swart, Ann Marie; Kitchener, Henry C; Renehan, Andrew G

    2012-04-01

    Body mass index (BMI) is a major risk factor for endometrial cancer incidence but its impact on post-treatment survival is unclear. We investigated the relationships of BMI (categorised using the WHO definitions) with clinico-pathological characteristics and outcome in women treated within the MRC ASTEC randomised trial, which provides data from patients who received standardised allocated treatments and therefore reduces biases. The impact of BMI on both recurrence-free survival (RFS) and overall survival (OS) was analysed using Cox regression models. An a priori framework for evaluating potential biases was explored. From 1408 participants, there were 1070 women with determinable BMI (median=29.1 kg/m(2)). Histological types were endometrioid (type 1) in 893 and non-endometrioid (type 2) in 146 women; the proportion of the latter decreasing with increasing BMI (8% versus 19% for obese III WHO category versus normal weight, p(trend)=0.003). For type 1 carcinomas, increasing BMI was associated with less aggressive histopathological features (depth of invasion, p=0.006; tumour grade, p=0.015). With a median follow-up of 34.3 months, there was no influence of BMI on RFS: adjusted HRs per 5 kg/m(2) were 0.98 (95% CI 0.86, 1.13) and 0.95 (0.74, 1.24), for type 1 and 2 carcinomas; and no influence on OS: adjusted HRs per 5 kg/m(2) were 0.96 (0.81, 1.14) and 0.92 (0.70, 1.23), respectively. These findings demonstrate an important principle: that an established link between an exposure (here, obesity) and increased incident cancer risk does not necessarily translate into an inferior outcome following treatment for that cancer. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Simulation of the ACE L2 and ACE L5 MCCI experiment under dry surface conditions with ASTEC MEDICIS using an effective heat transfer model

    Agethen, Kathrin; Koch, Marco K. [Bochum Univ. (Germany). Reactor Simulation and Safety Group

    2013-07-01

    In a postulated severe accident the loss of cooling can lead to melting of the core and to a failure of the vessel. The molten core material discharges into the containment cavity and interacts with the concrete basemat. The heat-up of the concrete leads to the release of sparging gases (H{sub 2}, CO{sub 2}, SiO), which stir the pool and cause chemical reactions. In particular, the metals (Zr, Fe, Ni, Cr) in the corium are oxidized and the exothermic energy is released to the melt, raising the melt temperature further. As a result, combustible gases (H{sub 2}, CO) and fission products are released to the containment atmosphere. In the long term (>10 h) containment failure and basemat penetration may occur, which can lead to fission product release to the environment. For further development and validation, simulations of experiments investigating molten core concrete interaction (MCCI) are necessary. In this work the newly available effective heat transfer model in MEDICIS is used to calculate experiments of the ACE program, in which generic corium material is heated up and interacts with the concrete basemat. In particular, the ACE L2 experiment with siliceous concrete and the ACE L5 experiment with limestone common sand (LCS) concrete are presented. These experiments make it possible to analyze the heat transfer from the interior of the melt to the upper surface under dry conditions. Secondly, the modelling in ASTEC version 2.p2 with the effective heat transfer module in MEDICIS is described. Results of the MEDICIS simulations are discussed in terms of phenomena such as ablation behavior and erosion depth, layer temperature and surface heat loss. Finally, the issue of an effective heat transfer coefficient for the surface under dry conditions without top flooding is addressed. (orig.)
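The leading-order energy balance behind basemat erosion can be written down in one line: a quasi-steady ablation front advances at the rate at which the incoming heat flux supplies the concrete decomposition enthalpy. The numerical values below are only order-of-magnitude assumptions for illustration, not ACE data.

```python
def ablation_velocity(q_flux, rho_concrete, h_decomposition):
    """Quasi-steady ablation front velocity in m/s:
    v = q'' / (rho * dH), with q'' in W/m^2, rho in kg/m^3, dH in J/kg."""
    return q_flux / (rho_concrete * h_decomposition)

# illustrative values: 200 kW/m^2 into concrete of 2300 kg/m^3
# with an assumed ~2 MJ/kg decomposition enthalpy
v = ablation_velocity(200e3, 2300.0, 2.0e6)
```

With these assumed inputs the front advances on the order of fifteen centimetres per hour, the magnitude typically discussed for MCCI erosion.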

  13. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms [v2; ref status: indexed, http://f1000r.es/1td

    Maxinder S Kanwal

    2013-11-01

    Full Text Available Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates.
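The idea of a mutation rate driven by the generation-to-generation fitness slope can be sketched as follows. This is a minimal illustration of the principle (stagnation widens the mutation scale, progress narrows it), not the authors' exact operator; all parameters are assumed.

```python
import math
import random

def adaptive_ga(f, lo, hi, pop_size=40, gens=300, seed=1):
    """Minimise f on [lo, hi] with elitism and a mutation scale driven by
    a pseudoderivative of the best fitness: a near-zero slope (stagnation)
    widens the mutation to escape local optima, progress narrows it."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    sigma = 1.0
    prev_best = None
    for _ in range(gens):
        pop.sort(key=f)
        best = f(pop[0])
        if prev_best is not None:
            slope = prev_best - best          # >= 0 thanks to elitism
            if slope < 1e-6:                  # flat: try to escape
                sigma = min(2.0, sigma * 1.5)
            else:                             # improving: refine locally
                sigma = max(0.01, sigma * 0.7)
        prev_best = best
        elite = pop[: pop_size // 4]
        children = [
            min(hi, max(lo, rng.choice(elite) + rng.gauss(0.0, sigma)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return min(pop, key=f)
```

On a multimodal test function such as x**2 + 10*(1 - cos(2*pi*x)), whose many local minima trap a fixed small mutation rate, the widening phase lets the search jump between basins while elitism preserves the best point found.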

  14. Adaptation of penelope Monte Carlo code system to the absorbed dose metrology: characterization of high energy photon beams and calculations of reference dosimeter correction factors; Adaptation du code Monte Carlo penelope pour la metrologie de la dose absorbee: caracterisation des faisceaux de photons X de haute energie et calcul de facteurs de correction de dosimetres de reference

    Mazurier, J

    1999-05-28

    This thesis has been performed in the framework of the national reference setting-up for absorbed dose in water, in the high energy photon beams provided by the SATURNE-43 medical accelerator of the BNM-LPRI (acronym for National Bureau of Metrology and Primary Standard Laboratory of Ionising Radiation). The aim of this work has been to develop and validate different user codes, based on the PENELOPE Monte Carlo code system, to determine the photon beam characteristics and to calculate the correction factors of reference dosimeters such as Fricke dosimeters and the graphite calorimeter. In a first step, the developed user codes were used to study the influence of the different components constituting the irradiation head. Variance reduction techniques were used to reduce the calculation time. The phase space was calculated for 6, 12 and 25 MV at the output surface of the accelerator head, and then used for calculating energy spectra and dose distributions in the reference water phantom. The results obtained were compared with experimental measurements. The second step was devoted to developing a user code for calculating the correction factors associated with both the BNM-LPRI graphite and Fricke dosimeters by means of a correlated sampling method starting from the energy spectra obtained in the first step. The calculated correction factors were then compared with experimental results and with calculations obtained with the EGS4 Monte Carlo code system. The good agreement between experimental and calculated results validates the simulations performed with the PENELOPE code system. (author)

  15. Liberamente tratto da... Storie, codici, tragitti, mediazioni tra letteratura e cinema Loosely adapted from… Stories, codes, travels, mediations between literature and cinema

    Donata Meneghelli

    2012-12-01

    Full Text Available

    Starting from a survey of the most recent critical literature on the subject, the essay proposes a critical reflection on adaptation, on the borders that delimit this practice and, in particular, on the specific issues raised by the cinematic adaptation of literary texts, within the broader field of the relationships between literature and cinema. These issues concern first of all the adaptation of canonical texts, the 'great' texts of the Western tradition: it is above all in this context that the notion of fidelity is invoked. 'Fidelity' is an apparently neutral term that in fact always conceals an implicit hierarchy and a defensive attitude towards a supposed axiological superiority of literature as a 'high' form, endowed with a dignity and a creative independence that cinema, as a popular and mass art, would lack.

    Against this implicit axiology, denounced well in advance by André Bazin, the essay aims to show the complexity and richness of the paths between literature and cinema. Taking as an example a little-known film, the one Mauro Bolognini made in 1962 from the novel Senilità, the reflection concludes by exploring some of the many implications traceable in Linda Hutcheon's notion of the 'unstable identity of a story'.

  16. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.
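A first-cut check that lumped-parameter analyses make on such deflagration experiments is the adiabatic isochoric complete combustion (AICC) pressure, which follows directly from the ideal gas law at constant volume. The numbers below are illustrative assumptions, not UFPE data.

```python
def aicc_pressure(p0, t0, t_aicc, mole_ratio=1.0):
    """Constant-volume ideal-gas pressure after complete combustion:
    p = p0 * (n2/n1) * (T_aicc / T0), with mole_ratio = n2/n1."""
    return p0 * mole_ratio * t_aicc / t0

# illustrative: 1 bar atmosphere at 300 K heated to an assumed 1200 K
p = aicc_pressure(1.0, 300.0, 1200.0)
```

Measured peak pressures fall below this bound because of heat losses and incomplete combustion, which is one reason the pressure increase is the quantity codes reproduce most reliably.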

  17. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Kljenak, Ivo; Kuznetsov, Mikhail; Kostka, Pal; Kubišova, Lubica; Maltsev, Mikhail; Manzini, Giovanni; Povilaitis, Mantas

    2015-01-01

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description

  18. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), is implemented in a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
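The fifth-order WENO half of the hybrid scheme can be sketched in its standard scalar form: three third-order candidate stencils are blended with smoothness-weighted coefficients. This is the textbook Jiang-Shu formulation, not the paper's specific hybrid switching logic.

```python
def weno5_right_interface(v):
    """Classical Jiang-Shu WENO5: reconstruct the value at the right
    interface x_{i+1/2} from five cell averages
    v = [v[i-2], v[i-1], v[i], v[i+1], v[i+2]]."""
    eps = 1e-6
    # three third-order candidate reconstructions
    p0 = (2 * v[0] - 7 * v[1] + 11 * v[2]) / 6
    p1 = (-v[1] + 5 * v[2] + 2 * v[3]) / 6
    p2 = (2 * v[2] + 5 * v[3] - v[4]) / 6
    # smoothness indicators (large where the stencil crosses a discontinuity)
    b0 = 13/12 * (v[0] - 2*v[1] + v[2])**2 + 1/4 * (v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12 * (v[1] - 2*v[2] + v[3])**2 + 1/4 * (v[1] - v[3])**2
    b2 = 13/12 * (v[2] - 2*v[3] + v[4])**2 + 1/4 * (3*v[2] - 4*v[3] + v[4])**2
    # nonlinear weights around the ideal linear weights (0.1, 0.6, 0.3)
    a = [d / (eps + b)**2 for d, b in zip((0.1, 0.6, 0.3), (b0, b1, b2))]
    return (a[0]*p0 + a[1]*p1 + a[2]*p2) / sum(a)
```

On smooth data the weights revert to the ideal linear ones and fifth-order accuracy is recovered; near a discontinuity the offending stencils are suppressed, which is the low-dissipation, non-oscillatory property the abstract relies on.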

  19. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
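The role of a local code at a GLDPC constraint node can be illustrated with the smallest component code mentioned, Hamming(7,4): its parity-check matrix has column j equal to the binary representation of j+1, so the syndrome of a single-bit error reads off the error position directly. This sketches the component decoder only, not Ashikhmin-Lytsin MAP decoding.

```python
# Hamming(7,4) parity-check matrix: column j is the binary expansion
# of j+1, MSB first
H = [[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)]

def correct_single_error(r):
    """Correct the single-bit error (if any) indicated by the syndrome."""
    s = [sum(hi * ri for hi, ri in zip(row, r)) % 2 for row in H]
    pos = 4 * s[0] + 2 * s[1] + s[2]   # 1-indexed error position, 0 if clean
    out = list(r)
    if pos:
        out[pos - 1] ^= 1
    return out
```

In a GLDPC code each constraint node requires the bits attached to it to form a codeword of such a local code, rather than merely satisfying a single parity check as in a plain LDPC code.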

  20. The Development of Severe Accident Codes at IRSN and Their Application to Support the Safety Assessment of EPR

    Caroli, Cataldo; Bleyer, Alexandre; Bentaib, Ahmed; Chatelard, Patrick; Cranga, Michel; Van Dorsselaere, Jean-Pierre

    2006-01-01

    IRSN uses a two-tier approach for development of codes analysing the course of a hypothetical severe accident (SA) in a Pressurized Water Reactor (PWR): on one hand, the integral code ASTEC, jointly developed by IRSN and GRS, for fast-running and complete analysis of a sequence; on the other hand, detailed codes for best-estimate analysis of some phenomena such as ICARE/CATHARE, MC3D (for steam explosion), CROCO and TONUS. They have been extensively used to support the level 2 Probabilistic Safety Assessment of the 900 MWe PWR and, in general, for the safety analysis of the French PWR. In particular the codes ICARE/CATHARE, CROCO, MEDICIS (module of ASTEC) and TONUS are used to support the safety assessment of the European Pressurized Reactor (EPR). The ICARE/CATHARE code system has been developed for the detailed evaluation of SA consequences in a PWR primary system. It is composed of the coupling of the core degradation IRSN code ICARE2 and of the thermal-hydraulics French code CATHARE2. The CFD code CROCO describes the corium flow in the spreading compartment. Heat transfer to the surrounding atmosphere and to the basemat, leading to the possible formation of an upper and lower crust, basemat ablation and gas sparging through the flow are modelled. CROCO has been validated against a wide experimental basis, including the CORINE, KATS and VULCANO programs. MEDICIS simulates MCCI (Molten-Corium-Concrete-Interaction) using a lumped-parameter approach. Its models are being continuously improved through the interpretation of most MCCI experiments (OECD-CCI, ACE...). The TONUS code has been developed by IRSN in collaboration with CEA for the analysis of the hydrogen risk (both distribution and combustion) in the reactor containment. The analyses carried out to support the EPR safety assessment are based on a CFD formulation. At this purpose a low-Mach number multi-component Navier-Stokes solver is used to analyse the hydrogen distribution. 
Presence of air, steam and

  1. Speaking Code

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  2. High Order Modulation Protograph Codes

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
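The lifting operation can be sketched in its generic form: each 1 in a protograph base matrix is replaced by a Z x Z circulant permutation (a cyclically shifted identity), expanding the protograph to codeword length. The base matrix and shift values below are arbitrary illustrations, not the patent's protographs.

```python
def lift_protograph(B, Z, shifts):
    """Replace each nonzero entry B[i][j] by the Z x Z identity matrix
    cyclically shifted by shifts[i][j]; zeros become all-zero blocks."""
    m, n = len(B), len(B[0])
    H = [[0] * (n * Z) for _ in range(m * Z)]
    for i in range(m):
        for j in range(n):
            if B[i][j]:
                s = shifts[i][j] % Z
                for r in range(Z):
                    H[i * Z + r][j * Z + (r + s) % Z] = 1
    return H

B = [[1, 1, 0],
     [0, 1, 1]]
shifts = [[1, 3, 0],
          [0, 2, 4]]
H = lift_protograph(B, 5, shifts)
```

Because each block is a permutation, the lifted matrix inherits the row and column weights of the protograph, so the degree structure the protograph was designed for is preserved at full codeword length.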

  3. Coding Labour

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  4. Adaptive Mesh Refinement in CTH

    Crawford, David

    1999-01-01

    This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor-of-three improvement in memory and performance over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
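The block-based 2:1 refinement rule mentioned above can be illustrated in one dimension. This is a hedged sketch, not CTH's actual algorithm: it shows the essential invariant that, after refining a block, neighbouring blocks are refined until adjacent levels never differ by more than one.

```python
def refine(levels, idx):
    """Split block idx into two children one level finer (isotropic 2:1 split).

    `levels` is the left-to-right list of refinement levels of a 1-D
    block-structured mesh.
    """
    lev = levels[idx]
    levels[idx:idx + 1] = [lev + 1, lev + 1]

def balance(levels):
    """Restore the 2:1 balance rule: wherever two adjacent blocks differ by
    more than one level, refine the coarser one, and repeat until stable."""
    changed = True
    while changed:
        changed = False
        for i in range(len(levels) - 1):
            if abs(levels[i] - levels[i + 1]) > 1:
                j = i if levels[i] < levels[i + 1] else i + 1
                refine(levels, j)
                changed = True
                break
    return levels

mesh = [0, 0, 0, 0]
refine(mesh, 2); refine(mesh, 2); refine(mesh, 2)   # deep local refinement
balance(mesh)                                       # coarse neighbours follow
```

After balancing, the levels step down gradually away from the refined region, which is what keeps flux matching at block boundaries simple in a real AMR hydrocode.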

  5. The Spanish national health care-associated infection surveillance network (INCLIMECC): data summary January 1997 through December 2006 adapted to the new National Healthcare Safety Network Procedure-associated module codes.

    Pérez, Cristina Díaz-Agero; Rodela, Ana Robustillo; Monge Jodrá, Vincente

    2009-12-01

    In 1997, a national standardized surveillance system (designated INCLIMECC [Indicadores Clínicos de Mejora Continua de la Calidad]) was established in Spain for health care-associated infection (HAI) in surgery patients, based on the National Nosocomial Infection Surveillance (NNIS) system. In 2005, in its procedure-associated module, the National Healthcare Safety Network (NHSN) inherited the NNIS program for surveillance of HAI in surgery patients and reorganized all surgical procedures. INCLIMECC actively monitors all patients referred to the surgical ward of each participating hospital. We present a summary of the data collected from January 1997 to December 2006 adapted to the new NHSN procedures. Surgical site infection (SSI) rates are provided by operative procedure and NNIS risk index category. Further quality indicators reported are surgical complications, length of stay, antimicrobial prophylaxis, mortality, readmission because of infection or other complication, and revision surgery. Because the ICD-9-CM surgery procedure code is included in each patient's record, we were able to reorganize our database avoiding the loss of extensive information, as has occurred with other systems.

  6. Speech coding

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the

  7. Optimal codes as Tanner codes with cyclic component codes

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
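A toy version of the Tanner construction can make the idea concrete. This sketch is illustrative only: it uses the complete bipartite graph K_{3,3} and a single-parity-check component code at each vertex, not the point-line incidence graph of A(2,q) and the cyclic affine variety component codes studied in the article.

```python
from itertools import product

# Code bits live on the edges of the bipartite graph K_{3,3}.
left, right = range(3), range(3)
edges = [(u, v) for u in left for v in right]   # 9 code bits

def in_code(word):
    """A word is in the Tanner code iff, at every vertex, the bits on the
    incident edges satisfy the component code (here: even parity)."""
    for u in left:
        if sum(word[i] for i, (a, _) in enumerate(edges) if a == u) % 2:
            return False
    for v in right:
        if sum(word[i] for i, (_, b) in enumerate(edges) if b == v) % 2:
            return False
    return True

# Brute force over all 2^9 edge labelings to find the code.
codewords = [w for w in product((0, 1), repeat=9) if in_code(w)]
```

Here the codewords are exactly the 3x3 binary matrices with even row and column sums, a [9, 4] code; replacing the parity check by a stronger cyclic component code, and the graph by a better expander, is what yields the good (sometimes optimal) codes of the article.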

  8. Aztheca Code

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer, when the reactor operates in a stationary state. To demonstrate that the model is also applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)

  9. Vocable Code

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other is a mix of computer programming syntax and human language. In this sense queer code can... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  10. NSURE code

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's Manual, describing simulation procedures, input data preparation, output and example test cases.

  11. Adaptive RD Optimized Hybrid Sound Coding

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  12. The Aster code; Code Aster

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures, developed by the R&D Division of Electricité de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  13. Coding Class

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS) at the Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy at Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University in Copenhagen. We followed, evaluated and documented the Coding Class project in the period November 2016 to May 2017...

  14. Uplink Coding

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  15. ANIMAL code

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.

  16. Network Coding

    Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp. 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  17. MCNP code

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  18. Expander Codes

    Expander Codes – The Sipser–Spielman Construction. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1.

  19. Atlas C++ Coding Standard Specification

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, which should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group, and feedback from the collaboration was taken into account in the "current" version.

  20. The Redox Code.

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  1. Panda code

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  2. MARS Code in Linux Environment

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into the Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5 alone, could be applied for the whole NSSS system analysis. The 3-D module, developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  4. CANAL code

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  5. The Nudo, Rollo, Melon codes and nodal correlations

    Perlado, J.M.; Aragones, J.M.; Minguez, E.; Pena, J.

    1975-01-01

    The paper analyses nodal calculations, checking the results against reference reactor experimental data. The Nudo code is described, which adapts experimental data to nodal calculations. The Rollo and Melon codes are presented as improvements in the cycle-life calculations of albedos, mixing parameters and nodal correlations. (author)

  6. Adaptive Combined Source and Channel Decoding with Modulation ...

    In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. Huffman code is used as the source code and Convolutional code is used for error control. The adaptive scheme employs a family of Convolutional codes of different rates ...
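The adaptive element described above, selecting a convolutional code rate from a family in response to the channel, can be sketched as follows. The rates and SNR thresholds below are hypothetical placeholders, not values from the paper, and the actual Huffman/convolutional encoding is not shown.

```python
# Hypothetical (rate, minimum-required-SNR-in-dB) family, most robust first.
CODE_FAMILY = [(1 / 2, 3.0), (2 / 3, 6.0), (3 / 4, 8.5), (5 / 6, 11.0)]

def select_rate(snr_db):
    """Pick the highest code rate whose SNR requirement the current channel
    estimate meets; fall back to the most robust rate on poor channels."""
    best = CODE_FAMILY[0][0]
    for rate, min_snr in CODE_FAMILY:
        if snr_db >= min_snr:
            best = rate
    return best
```

On a slow-fading channel the estimate changes slowly, so re-running the selection once per frame (or per fade interval) is enough to track the channel.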

  7. Comparison of ASTECV1.3.2 and ASTECV2 results for QUENCH 12 test

    Stefanova, A.

    2010-01-01

    This paper presents a comparison of QUENCH-12 test results calculated with the ASTECv1.3R2 and ASTECv2 computer codes. The test was performed to investigate the behavior of VVER fuel assemblies. This investigation is part of the EC-supported ISTC program within the 6th and 7th Framework Programmes. The test facility is located at Forschungszentrum Karlsruhe; its structure allows experimental studies under transient and accident conditions. The ASTECv1.3R2 and ASTECv2 computer codes have been used to simulate the investigated test. The baseline input model for ASTEC was provided by Forschungszentrum Karlsruhe. During the preparation of the QUENCH-12 experiment, the input deck was adapted to the new initial and boundary conditions. The comparison shows good agreement between the measured data and the ASTEC calculated results. (author)

  8. From concatenated codes to graph codes

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  9. Adaptation and perceptual norms

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  10. Nevada Administrative Code for Special Education Programs.

    Nevada State Dept. of Education, Carson City. Special Education Branch.

    This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and…

  11. Code Modernization of VPIC

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorisation with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  12. Automatic code generation in practice

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds...

  13. Improvements to SOIL: An Eulerian hydrodynamics code

    Davis, C.G.

    1988-04-01

    Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength of materials, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. Several conclusions from the present study suggest that some improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs

  14. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Chien-Chia Chen

    2011-07-01

    TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem still remains of DATA-ACK interference. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a “pipeline” random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable-hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high-loss and mobile scenarios, while introducing minimal overhead in normal operation.
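The inter-flow part of the scheme, XOR-coding a DATA packet with an ACK travelling the opposite way, can be sketched in a few lines. This is an illustration of the XOR idea only; the packet contents are made up, and the intra-flow random linear coding with adaptive redundancy is not shown.

```python
def xor_bytes(a, b):
    """XOR two equal-length packets (a real relay would pad the shorter one)."""
    return bytes(x ^ y for x, y in zip(a, b))

# A relay holds a DATA packet heading one way and an ACK heading the other.
data = b"DATA:payload-123"
ack  = b"ACK :seqno=00042"        # padded to the same length

# One coded broadcast replaces two separate transmissions.
coded = xor_bytes(data, ack)

# The DATA sender already knows `data`, so it recovers the ACK;
# the ACK sender already knows `ack`, so it recovers the DATA.
recovered_ack  = xor_bytes(coded, data)
recovered_data = xor_bytes(coded, ack)
```

Each endpoint cancels out the packet it originally sent, which is why XOR-coding the two opposing flows reduces DATA-ACK interference without losing information.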

  15. Rate Adaptive OFDMA Communication Systems

    Abdelhakim, M.M.M.

    2009-01-01

    Due to the varying nature of wireless channels, adapting the transmission parameters, such as code rate, modulation order and power, in response to the channel variations provides a significant improvement in system performance. In OFDM systems, per-frame adaptation (PFA) can be employed, where the transmission variables are fixed over a given frame and may change from one frame to the other. Subband (tile) loading offers more degrees of adaptation, such that each group of carriers (subband) uses the same transmission parameters and different subbands may use different parameters. Changing the code rate for each tile in the same frame results in transmitting multiple codewords (MCWs) for a single frame. In this thesis a scheme is proposed for adaptively changing the code rate of coded OFDMA systems via changing the puncturing rate within a single codeword (SCW). In the proposed structure, the data is encoded with the lowest available code rate and then divided among the different tiles, where it is punctured adaptively based on some measure of the channel quality for each tile. The proposed scheme is compared against using multiple codewords (MCWs), where the different code rates for the tiles are obtained using separate encoding processes. For the bit-interleaved coded modulation architecture, two novel interleaving methods are proposed, namely the puncturing-dependent interleaver (PDI) and interleaved puncturing (IntP), which provide larger interleaving depth. In the PDI method, the coded bits with the same rate over different tiles are grouped for interleaving. In the IntP structure, the interleaving is performed prior to puncturing. The performance of the adaptive puncturing technique is investigated under a constant bit rate constraint and under variable bit rate. Two different adaptive modulation and coding (AMC) selection methods are examined for the variable-bit-rate adaptive system. The first is a recursive scheme that operates directly on the SNR whereas the second
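The single-codeword idea above, encode once at the mother rate, then puncture each tile's share according to its channel quality, can be sketched as follows. The puncturing patterns and quality labels are hypothetical placeholders, not the thesis's actual patterns.

```python
# Hypothetical puncturing patterns over a rate-1/3 mother codeword:
# 1 = transmit, 0 = puncture. Better tiles keep fewer coded bits,
# i.e. run at a higher effective code rate.
PATTERNS = {
    "poor": [1, 1, 1],   # effective rate 1/3
    "fair": [1, 1, 0],   # effective rate 1/2
    "good": [1, 0, 0],   # effective rate 1
}

def puncture_tile(coded_bits, quality):
    """Apply the tile's puncturing pattern cyclically to its share of the
    single mother codeword."""
    pat = PATTERNS[quality]
    return [b for i, b in enumerate(coded_bits) if pat[i % len(pat)]]

# One codeword (stand-in indices for rate-1/3 coded bits) split over two tiles.
codeword = list(range(12))
tiles = [codeword[0:6], codeword[6:12]]
sent = [puncture_tile(t, q) for t, q in zip(tiles, ["poor", "good"])]
```

Because all tiles share one mother codeword, the receiver reinserts erasures at the punctured positions and runs a single decode, instead of decoding one codeword per tile as in the MCW alternative.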

  16. Automatic coding method of the ACR Code

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself, among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology.

  17. Error-correction coding

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  18. Adaptation and implementation of the TRACE code for transient analysis of lead-cooled fast reactor designs; Adaptacion y aplicacion del codigo TRACE para el analisis de transitorios en disenos de reactores rapidos refrigerados por plomo

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2015-07-01

    The Lead-Cooled Fast Reactor (LFR) has been identified as one of the promising future reactor concepts in the technology road map of the Generation IV International Forum (GIF) as well as in the Deployment Strategy of the European Sustainable Nuclear Industrial Initiative (ESNII), both aiming at improved sustainability, enhanced safety, economic competitiveness, and proliferation resistance. This new nuclear reactor concept requires the development of computational tools to be applied in design and safety assessments to confirm the improved inherent and passive safety features of this design. One approach to this issue is to modify the current computational codes developed for the simulation of Light Water Reactors towards their applicability to the new designs. This paper reports on the modifications performed on the TRACE system code to make it applicable to LFR safety assessments. The capabilities of the modified code are demonstrated on a series of benchmark exercises performed against other safety analysis codes. (Author)

  19. Dynamic Shannon Coding

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
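
    As a minimal illustration of the static code underlying the algorithm: Shannon coding assigns a symbol of probability p a codeword of length ceil(-log2 p), and the Kraft inequality then guarantees that a prefix-free code with those lengths exists. The probabilities below are arbitrary; the paper's dynamic variant updates them online.

```python
import math

def shannon_lengths(probs):
    """Codeword lengths ceil(-log2 p) for each symbol probability."""
    return [math.ceil(-math.log2(p)) for p in probs]

probs = [0.4, 0.3, 0.2, 0.1]
lengths = shannon_lengths(probs)
# Kraft sum <= 1 guarantees a prefix-free code with these lengths exists.
kraft = sum(2.0 ** -l for l in lengths)
print(lengths, kraft)  # → [2, 2, 3, 4] 0.6875
```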

  20. Fundamentals of convolutional coding

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
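
    To make the basic principles concrete, here is the standard textbook rate-1/2 convolutional encoder with memory 2 and generator polynomials 7 and 5 (octal); this example is illustrative and not taken from the book itself.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5)_8.
G = (0b111, 0b101)  # generator polynomials

def conv_encode(bits):
    """Shift each input bit through a 2-bit register; emit one output
    bit per generator (the parity of the taps selected by g)."""
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state          # newest bit enters on the left
        for g in G:
            out.append(bin(reg & g).count("1") % 2)
        state = reg >> 1                # register advances by one bit
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```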

  1. Codes Over Hyperfields

    Atamewoue Surdive

    2017-12-01

    In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.
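
    The generator/parity-check characterization mentioned here can be illustrated over an ordinary finite field; below is a sketch with the classical binary [7,4] Hamming code (the hyperfield construction in the paper generalizes this machinery).

```python
import itertools

# Generator matrix G (systematic form) and parity-check matrix H of the
# classical binary [7,4] Hamming code.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def encode(msg):
    """Codeword = msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """H * word^T over GF(2); zero exactly for codewords."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

# Every one of the 16 codewords has zero syndrome.
ok = all(syndrome(encode(m)) == [0, 0, 0]
         for m in itertools.product([0, 1], repeat=4))
print(ok)  # → True
```

    Flipping any single bit of a codeword yields a nonzero syndrome, which is exactly how the parity-check matrix detects (and here, locates) errors.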

  2. Enhanced attention amplifies face adaptation.

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Vector Network Coding Algorithms

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  4. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  5. Code stroke in Asturias.

    Benavente, L; Villanueva, M J; Vega, P; Casado, I; Vidal, J A; Castaño, B; Amorín, M; de la Vega, V; Santos, H; Trigo, A; Gómez, M B; Larrosa, D; Temprano, T; González, M; Murias, E; Calleja, S

    2016-04-01

    Intravenous thrombolysis with alteplase is an effective treatment for ischaemic stroke when applied during the first 4.5 hours, but less than 15% of patients have access to this technique. Mechanical thrombectomy is more frequently able to recanalise proximal occlusions in large vessels, but the infrastructure it requires makes it even less available. We describe the implementation of code stroke in Asturias, as well as the process of adapting various existing resources for urgent stroke care in the region. By considering these resources, and the demographic and geographic circumstances of our region, we examine ways of reorganising the code stroke protocol that would optimise treatment times and provide the most appropriate treatment for each patient. We distributed the 8 health districts in Asturias so as to permit referral of candidates for reperfusion therapies to either of the 2 hospitals with 24-hour stroke units and on-call neurologists and providing IV fibrinolysis. Hospitals were assigned according to proximity and stroke severity; the most severe cases were immediately referred to the hospital with on-call interventional neurology care. Patient triage was provided by pre-hospital emergency services according to the NIHSS score. Modifications to code stroke in Asturias have allowed us to apply reperfusion therapies with good results, while emphasising equitable care and managing the severity-time ratio to offer the best and safest treatment for each patient as soon as possible. Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  6. Distributed Video Coding: Iterative Improvements

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics … and noise modeling and also learn from the previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture … cross band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors …

  7. Homological stabilizer codes

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
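
    The graph-based definition can be made concrete with the smallest example: on a periodic 2x2 lattice, every X-type vertex (star) operator of Kitaev's toric code overlaps each Z-type plaquette operator on an even number of edges, which is exactly the commutation condition for the stabilizers. This is a minimal sketch, not the paper's general construction.

```python
# Toric-code stabilizers on an L x L periodic lattice, edges labelled as
# horizontal ("h") or vertical ("v") with wrapped coordinates.
L = 2
H = lambda y, x: ("h", y % L, x % L)
V = lambda y, x: ("v", y % L, x % L)

def star(y, x):       # X-type vertex operator: the 4 edges at a vertex
    return {H(y, x), H(y, x - 1), V(y, x), V(y - 1, x)}

def plaquette(y, x):  # Z-type face operator: the 4 edges around a face
    return {H(y, x), H(y + 1, x), V(y, x), V(y, x + 1)}

# X and Z Pauli products commute iff their supports share an even number
# of edges; check this for every star/plaquette pair.
overlaps = [len(star(y, x) & plaquette(b, a)) % 2
            for y in range(L) for x in range(L)
            for b in range(L) for a in range(L)]
print(all(o == 0 for o in overlaps))  # → True
```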

  8. Adaptation and implementation of the TRACE code for transient analysis on lead-cooled fast reactor designs; Adaptacion y aplicacion del codigo TRACE para el analisis de transitorios en disenos de reactores rapidos refrigerados por plomo

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2014-07-01

    The article describes the changes implemented in the TRACE code to include thermodynamic tables of liquid lead drawn from experimental results. It then explains the process of developing a thermohydraulic model for the ALFRED prototype and the analysis of a selection of representative transients conducted within the framework of international research projects. The study demonstrates the applicability of the TRACE code to simulating designs of lead-cooled fast reactors and shows the high safety margins available in this technology to accommodate the most severe transients identified in its safety study. (Author)

  9. Diagnostic Coding for Epilepsy.

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  10. Coding of Neuroinfectious Diseases.

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  11. Mistranslation: from adaptations to applications.

    Hoffman, Kyle S; O'Donoghue, Patrick; Brandl, Christopher J

    2017-11-01

    The conservation of the genetic code indicates that there was a single origin, but like all genetic material, the cell's interpretation of the code is subject to evolutionary pressure. Single nucleotide variations in tRNA sequences can modulate codon assignments by altering codon-anticodon pairing or tRNA charging. Either can increase translation errors and even change the code. The frozen accident hypothesis argued that changes to the code would destabilize the proteome and reduce fitness. In studies of model organisms, mistranslation often acts as an adaptive response. These studies reveal evolutionary conserved mechanisms to maintain proteostasis even during high rates of mistranslation. This review discusses the evolutionary basis of altered genetic codes, how mistranslation is identified, and how deviations to the genetic code are exploited. We revisit early discoveries of genetic code deviations and provide examples of adaptive mistranslation events in nature. Lastly, we highlight innovations in synthetic biology to expand the genetic code. The genetic code is still evolving. Mistranslation increases proteomic diversity that enables cells to survive stress conditions or suppress a deleterious allele. Genetic code variants have been identified by genome and metagenome sequence analyses, suppressor genetics, and biochemical characterization. Understanding the mechanisms of translation and genetic code deviations enables the design of new codes to produce novel proteins. Engineering the translation machinery and expanding the genetic code to incorporate non-canonical amino acids are valuable tools in synthetic biology that are impacting biomedical research. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. MEDICIS(ASTEC-V2) sensitivity calculations for investigation of the crust formation in VB-U5 and VB-U6 VULCANO tests

    Stefanova, A.; Grudev, P.; Gencheva, R.

    2011-01-01

    This paper presents the results from sensitivity calculations made with MEDICIS (ASTEC V2) for investigation of the crust formation during the Molten Corium-Concrete Interaction (MCCI) in the VB-U5 and VB-U6 VULCANO tests. All calculations are made with the MEDICIS computer code. The main goal of these analyses is to assess how the assumptions of crust formation, or lack of it, influence the concrete ablation. Three calculations have been done for each of the experiments, with different crust thicknesses and with lack of crust formation at the bottom, side and upper surfaces. (authors)

  13. Vector Network Coding

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  14. Generalized concatenated quantum codes

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  15. Rateless feedback codes

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
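
    A toy LT encode/peel-decode sketch of the rateless idea the paper builds on; the degree distribution below is an ad-hoc stand-in, not the feedback-adapted distributions designed in the paper.

```python
import random

def lt_encode(data, n_out, rng):
    """Each output symbol is the XOR of d randomly chosen input symbols."""
    symbols = []
    for _ in range(n_out):
        d = rng.choice([1, 2, 2, 3, 4])          # ad-hoc degree distribution
        idx = rng.sample(range(len(data)), d)
        val = 0
        for i in idx:
            val ^= data[i]
        symbols.append((set(idx), val))
    return symbols

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly consume degree-1 output symbols."""
    recovered = [None] * k
    syms = [[set(idx), val] for idx, val in symbols]  # mutable copies
    changed = True
    while changed:
        changed = False
        for s in syms:
            for i in [i for i in s[0] if recovered[i] is not None]:
                s[0].discard(i)                  # strip known inputs
                s[1] ^= recovered[i]
            if len(s[0]) == 1:                   # degree 1 reveals an input
                recovered[s[0].pop()] = s[1]
                changed = True
    return recovered

rng = random.Random(7)
data = [3, 1, 4, 1, 5, 9, 2, 6]
dec = lt_decode(lt_encode(data, 20, rng), len(data))
print(sum(d is not None for d in dec), "of", len(data), "symbols recovered")
```

    The coding overhead is the number of output symbols needed beyond k before peeling completes; the paper's feedback opportunities let the encoder reshape the degree distribution mid-stream to shrink exactly this overhead.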

  16. The LEONAR code: a new tool for PSA Level 2 analyses

    Tourniaire, B; Spindler, B.; Ratel, G.; Seiler, J.M.; Iooss, B.; Marques, M.; Gaudier, F.; Greffier, G.

    2011-01-01

    The LEONAR code, complementary to integral codes such as MAAP or ASTEC, is a new severe accident simulation tool which can easily calculate 1000 late-phase reactor situations within a few hours and provide a statistical evaluation of the situations. LEONAR can be used for the analysis of the impact on the failure probabilities of specific Severe Accident Management measures (for instance: water injection) or design modifications (for instance: pressure vessel flooding or dedicated reactor pit flooding), or to focus the research effort on key phenomena. The starting conditions for LEONAR are a set of core melting situations that are separately calculated by a core degradation code (such as MAAP, which is used by EDF). LEONAR describes the core melt evolution after flooding in the core, the corium relocation in the lower head (under dry and wet conditions), the evolution of corium in the lower head including the effect of flooding, the vessel failure, corium relocation in the reactor cavity, interaction between corium and basemat concrete, and possible corium spreading in the neighbouring rooms and on the containment floor. Scenario events as well as specific physical model parameters are characterised by a probability density distribution. The probabilistic evaluation is performed by URANIE, which is coupled to the physical calculations. The calculation results are treated in a statistical way in order to provide easily usable information. This tool can be used to identify the main parameters that influence corium coolability for severe accident late phases. It is aimed at efficiently replacing PIRT exercises. An important impact of such a tool is that it can be used to demonstrate that the probability of basemat failure can be significantly reduced by coupling a number of separate severe accident management measures or design modifications, even though each separate measure is not sufficient by itself to avoid the failure. (authors)

  17. Advanced video coding systems

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  18. Coding for dummies

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  19. Quantum computing with Majorana fermion codes

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  20. The code of ethics for nurses.

    Zahedi, F; Sanjari, M; Aala, M; Peymani, M; Aramesh, K; Parsapour, A; Maddah, Ss Bagher; Cheraghi, Ma; Mirzabeigi, Gh; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Nurses are ever-increasingly confronted with complex concerns in their practice. Codes of ethics are fundamental guidance for nursing as many other professions. Although there are authentic international codes of ethics for nurses, the national code would be the additional assistance provided for clinical nurses in their complex roles in care of patients, education, research and management of some parts of health care system in the country. A national code can provide nurses with culturally-adapted guidance and help them to make ethical decisions more closely to the Iranian-Islamic background. Given the general acknowledgement of the need, the National Code of Ethics for Nurses was compiled as a joint project (2009-2011). The Code was approved by the Health Policy Council of the Ministry of Health and Medical Education and communicated to all universities, healthcare centers, hospitals and research centers early in 2011. The focus of this article is on the course of action through which the Code was compiled, amended and approved. The main concepts of the code will be also presented here. No doubt, development of the codes should be considered as an ongoing process. This is an overall responsibility to keep the codes current, updated with the new progresses of science and emerging challenges, and pertinent to the nursing practice.

  1. Information preserving coding for multispectral data

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels by reserving two codewords in the codebook to perform a folding over in quantization is implemented for error free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
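
    The DPCM baseline that the folding mechanism extends can be sketched as follows; the step size is an arbitrary choice, and the codeword-folding itself is not reproduced here.

```python
# Previous-sample DPCM: transmit quantized prediction errors, and keep the
# encoder's predictor in lockstep with the decoder's reconstruction.

def dpcm_encode(samples, step=4):
    pred, out = 0, []
    for s in samples:
        q = round((s - pred) / step)   # quantized prediction error
        out.append(q)
        pred += q * step               # decoder-matched reconstruction
    return out

def dpcm_decode(codes, step=4):
    pred, out = 0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

codes = dpcm_encode([10, 14, 13, 20, 25])
# Reconstruction stays within step/2 of each input sample.
print(codes, dpcm_decode(codes))
```

    Folding, as the abstract describes it, reserves two codewords so the quantizer range can be expanded on the fly when an error exceeds the nominal range, keeping the coding error-free without prior knowledge of the error distribution.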

  2. A Spanish version for the new ERA-EDTA coding system for primary renal disease

    Óscar Zurriaga

    2015-07-01

    Conclusions: Translation and adaptation into Spanish represent an improvement that will help to introduce and use the new coding system for PRD, as it can help reduce the time devoted to coding and also the period of adaptation of health workers to the new codes.

  3. Adaptive Education.

    Anderson, Lorin W.

    1979-01-01

    Schools have devised several ways to adapt instruction to a wide variety of student abilities and needs. Judged by criteria for what adaptive education should be, most learning for mastery programs look good. (Author/JM)

  4. Discussion on LDPC Codes and Uplink Coding

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress made by the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  5. Analysis of top flooding during molten corium concrete interaction (MCCI) with the code MEDICIS using a simplified approach for the combined effect of crust formation and boiling

    Spengler, C.

    2012-01-01

    The objective of this work is to provide adequate models in the code MEDICIS for the molten corium concrete interaction (MCCI) phase of a severe accident. Here, the multidimensional distribution of heat fluxes from the molten pool of corium to the sidewall and bottom wall concrete structures in the reactor pit and to the top surface is a persistent subject of international research activities on MCCI. In recent experiments with internally heated oxide melts it was observed that the erosion progress may be anisotropic - with an apparent preference of the sidewall compared to the bottom wall - or isotropic, depending on the type of concrete with which the corium interacts. The lumped parameter code MEDICIS, which is part of the severe accident codes ASTEC and COCOSYS (developed by IRSN/GRS and by GRS, respectively), is dedicated to simulating the phenomenology during MCCI. In this work a simplified modelling approach in MEDICIS is tested to account for the observed ablation behaviour during MCCI, with focus on the heat transfer to the top surface under flooded conditions. This approach is assessed by calculations for selected MCCI experiments involving top flooding of the melt. (orig.)

  6. Locally orderless registration code

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.

  7. Decoding Codes on Graphs

    Among the earliest discovered codes that approach the Shannon limit of the channel were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding.

  8. Manually operated coded switch

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  9. Coding in Muscle Disease.

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  10. QR Codes 101

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  11. Coding with partially hidden Markov models

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt…
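
    The flavour of such adaptive model-based coding can be sketched with an order-1 context model using Laplace-smoothed counts; this deliberately omits the hidden states of the PHMM and only shows how conditioning on past observations yields the code length a sequential arithmetic coder would spend.

```python
import math

def code_length(seq, alphabet=2):
    """Ideal code length (in bits) of an adaptive order-1 model: each
    symbol is coded with -log2 of its smoothed conditional probability,
    then the count for its context is updated."""
    counts = {}
    bits, prev = 0.0, None
    for s in seq:
        ctx = counts.setdefault(prev, [1] * alphabet)  # Laplace prior
        p = ctx[s] / sum(ctx)
        bits += -math.log2(p)
        ctx[s] += 1
        prev = s
    return bits

# A highly regular bi-level-like sequence compresses well below 1 bit/symbol
# once the model has adapted.
print(round(code_length([0] * 20), 3))
```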

  12. Adaptive Lighting

    Petersen, Kjell Yngve; Søndergaard, Karin; Kongshaug, Jesper

    2015-01-01

    Adaptive Lighting. Adaptive lighting is based on a partial automation of the possibilities to adjust the colour tone and brightness levels of light in order to adapt to people's needs and desires. IT support is key to the technical developments that afford adaptive control systems. The possibilities offered by adaptive lighting control are created by the ways that the system components, the network and data flow can be coordinated through software so that the dynamic variations are controlled in ways that meaningfully adapt according to people's situations and design intentions. This book discusses … differently into an architectural body. We also examine what might occur when light is dynamic and able to change colour, intensity and direction, and when it is adaptive and can be brought into interaction with its surroundings. In short, what happens to an architectural space when artificial lighting ceases…

  13. Research on pre-processing of QR Code

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, full-range ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarization image from a complex background, and to improve the recognition rate of QR codes, this paper researches pre-processing methods for QR code (Quick Response Code) recognition and presents algorithms and results of image pre-processing. The conventional method is improved by adapting Sauvola's adaptive text binarization method. Additionally, the paper introduces QR code extraction that adapts to different image sizes, a flexible image correction approach, and improvements to the efficiency and accuracy of QR code image processing.
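    Sauvola's adaptive binarization, which the abstract builds on, thresholds each pixel against its local statistics rather than a single global value. The sketch below is an illustrative NumPy implementation; the window size, k and R are conventional defaults, not values from the paper.

```python
import numpy as np

def sauvola_threshold(img, window=15, k=0.2, R=128.0):
    """Per-pixel Sauvola threshold T = m * (1 + k * (s / R - 1)), where m
    and s are the local mean and standard deviation over a window x window
    neighbourhood, computed in O(1) per pixel via integral images."""
    img = img.astype(np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral images of values and squared values (with a zero border).
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    ii2 = np.pad(padded ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    n = window * window
    s1 = (ii[window:window + h, window:window + w] - ii[:h, window:window + w]
          - ii[window:window + h, :w] + ii[:h, :w])
    s2 = (ii2[window:window + h, window:window + w] - ii2[:h, window:window + w]
          - ii2[window:window + h, :w] + ii2[:h, :w])
    m = s1 / n
    s = np.sqrt(np.maximum(s2 / n - m ** 2, 0.0))
    return m * (1.0 + k * (s / R - 1.0))

def binarize(img, **kw):
    """1 where the pixel exceeds its local Sauvola threshold, else 0."""
    return (img > sauvola_threshold(img, **kw)).astype(np.uint8)
```

Because the threshold follows the local mean, dark background gradients that defeat a global threshold do not corrupt the extracted QR modules.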

  14. Adaptive Evolution Coupled with Retrotransposon Exaptation Allowed for the Generation of a Human-Protein-Specific Coding Gene That Promotes Cancer Cell Proliferation and Metastasis in Both Haematological Malignancies and Solid Tumours: The Extraordinary Case of MYEOV Gene

    Spyros I. Papamichos

    2015-01-01

    The incidence of cancer in humans is high compared to that in chimpanzees. However, previous analyses have documented that numerous human cancer-related genes are highly conserved in chimpanzee. To date, whether the human genome includes species-specific cancer-related genes that could potentially contribute to a higher cancer susceptibility remains obscure. This study focuses on MYEOV, an oncogene encoding two protein isoforms, reported as causally involved in promoting cancer cell proliferation and metastasis in both haematological malignancies and solid tumours. First we document, via stringent in silico analysis, that MYEOV arose de novo in Catarrhini. We show that the MYEOV short-isoform start codon was evolutionarily acquired after the Catarrhini/Platyrrhini divergence. Throughout the course of Catarrhini evolution MYEOV acquired a gradually elongated translatable open reading frame (ORF), a gradually shortened translation-regulatory upstream ORF, and alternatively spliced mRNA variants. A point mutation introduced in human allowed for the acquisition of the MYEOV long-isoform start codon. Second, we demonstrate the substantial impact of exonized transposable elements on the creation of the MYEOV gene structure. Third, we highlight that the initial part of the MYEOV long-isoform coding DNA sequence was under positive selection pressure during Catarrhini evolution. MYEOV represents a Primate Orphan Gene that acquired, via ORF expansion, a human-protein-specific coding potential.

  15. Visual communication with retinex coding.

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
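    The difference-of-Gaussians bandpass filter in the small-signal model above can be sketched in one dimension: a narrow "center" Gaussian minus a wider "surround" Gaussian, which rejects the slowly varying irradiance level while keeping edge detail. This is an illustrative reconstruction, not the authors' implementation; the sigma values are arbitrary.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel truncated at the given radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def dog_filter(signal, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians bandpass filter: suppresses the DC level
    (slow irradiance variation) because both kernels sum to 1, so their
    difference sums to 0, while passing edge detail near sigma_c scale."""
    r = int(3 * sigma_s)
    dog = gaussian_kernel(sigma_c, r) - gaussian_kernel(sigma_s, r)
    return np.convolve(signal, dog, mode="same")

# A reflectance step riding on a constant irradiance level:
x = np.concatenate([np.full(50, 10.0), np.full(50, 20.0)])
y = dog_filter(x)  # ~0 in the flat regions, a response pulse at the edge
```

The flat regions map to zero and only the edge survives, which is the "edge and contrast information" the Wiener restoration stage then works from.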

  17. Latest improvements on TRACPWR six-equations thermohydraulic code

    Rivero, N.; Batuecas, T.; Martinez, R.; Munoz, J.; Lenhardt, G.; Serrano, P.

    1999-01-01

    The paper presents the latest improvements to TRACPWR, aimed at adapting the code to present trends in computer platforms, architectures and training requirements, as well as extending the scope of the code itself and its applicability to technologies other than the Westinghouse PWR. First, the major features of TRACPWR as a best-estimate and real-time simulation code are summarized; then the areas where TRACPWR is being improved are presented. These areas comprise: (1) Architecture: integrating the TRACPWR and RELAP5 codes; (2) Code scope enhancement: modelling Mid-Loop operation; (3) Code speed-up: applying parallelization techniques; (4) Code platform downsizing: porting to the Windows NT platform; (5) On-line performance: allowing simulation initialisation from a Plant Process Computer; and (6) Code scope extension: using the code to model VVER and PHWR technology. (author)

  18. Codes and curves

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
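    The re-interpretation of Reed-Solomon codes mentioned above — codewords as evaluations of low-degree polynomials at distinct field points, which algebraic geometric codes generalize to functions on a curve — can be sketched over a small prime field. This is a textbook illustration, not material from the monograph.

```python
P = 7  # work in the prime field GF(7) for simplicity

def rs_encode(message, points):
    """Reed-Solomon encoding: treat the message as the coefficients of a
    polynomial of degree < k and evaluate it at n distinct field points.
    Two distinct degree-<k polynomials agree on at most k-1 points, so
    distinct messages give codewords differing in >= n-k+1 positions."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P
            for x in points]

points = list(range(P))            # all 7 points of GF(7)
cw = rs_encode([3, 1, 4], points)  # k=3, n=7 -> minimum distance 5
```

Replacing "points of GF(q)" with "rational points of a curve over GF(q)" and "polynomials" with "functions with prescribed poles" yields exactly the divisor-based construction the lectures develop.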

  19. Thermodynamic data bases and calculation code adapted to the modelling of molten core concrete interaction (M.C.C.I.) phenomena, developed jointly by Thermodata and the "Institut de Protection et de Sûreté Nucléaire" (France)

    Cenerino, G.

    1992-01-01

    An oxide data base containing the five main oxides (Al2O3, CaO, SiO2, UO2 and ZrO2) of a corium, obtained if the reactor core melts through the vessel and slumps into the concrete reactor cavity, is developed using the GEMINI2 code. This study of the oxide quinary system takes into account physically realistic thermodynamic modelling of all the possible equilibrium species of the system. Two applications are presented: the determination of liquidus and solidus temperatures of selected mixtures of the quinary system (core: UO2-ZrO2; concrete: Al2O3-CaO-SiO2), and a better modelling of fission-product release by vaporization from the corium. (A.B.). 5 refs., 2 figs

  20. Novel polymorphisms in UTR and coding region of inducible heat shock protein 70.1 gene in tropically adapted Indian zebu cattle (Bos indicus) and riverine buffalo (Bubalus bubalis).

    Sodhi, M; Mukesh, M; Kishore, A; Mishra, B P; Kataria, R S; Joshi, B K

    2013-09-25

    Due to evolutionary divergence, cattle (taurine, and indicine) and buffalo are speculated to have different responses to heat stress condition. Variation in candidate genes associated with a heat-shock response may provide an insight into the dissimilarity and suggest targets for intervention. The present work was undertaken to characterize one of the inducible heat shock protein genes promoter and coding regions in diverse breeds of Indian zebu cattle and buffaloes. The genomic DNA from a panel of 117 unrelated animals representing 14 diversified native cattle breeds and 6 buffalo breeds were utilized to determine the complete sequence and gene diversity of HSP70.1 gene. The coding region of HSP70.1 gene in Indian zebu cattle, Bos taurus and buffalo was similar in length (1,926 bp) encoding a HSP70 protein of 641 amino acids with a calculated molecular weight (Mw) of 70.26 kDa. However buffalo had a longer 5' and 3' untranslated region (UTR) of 204 and 293 nucleotides respectively, in comparison to Indian zebu cattle and Bos taurus wherein length of 5' and 3'-UTR was 172 and 286 nucleotides, respectively. The increased length of buffalo HSP70.1 gene compared to indicine and taurine gene was due to two insertions each in 5' and 3'-UTR. Comparative sequence analysis of cattle (taurine and indicine) and buffalo HSP70.1 gene revealed a total of 54 gene variations (50 SNPs and 4 INDELs) among the three species in the HSP70.1 gene. The minor allele frequencies of these nucleotide variations varied from 0.03 to 0.5 with an average of 0.26. Among the 14 B. indicus cattle breeds studied, a total of 19 polymorphic sites were identified: 4 in the 5'-UTR and 15 in the coding region (of these 2 were non-synonymous). Analysis among buffalo breeds revealed 15 SNPs throughout the gene: 6 at the 5' flanking region and 9 in the coding region. 
In the bubaline 5'-UTR, 2 additional putative transcription factor binding sites (Elk-1 and c-Rel) were identified, in addition to three common sites

  1. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    Alloum, Amira; Lin, Sian Jheng; Al-Naffouri, Tareq Y.

    2016-01-01

    , and more specifically at the application layer where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet

  2. Uniform Circular Antenna Array Applications in Coded DS-CDMA Mobile Communication Systems

    Seow, Tian

    2003-01-01

    ...) has greatly increased. This thesis examines the use of an equally spaced circular adaptive antenna array at the mobile station for a typical coded direct sequence code division multiple access (DS-CDMA...

  3. ADAPT Dataset

    National Aeronautics and Space Administration — Advanced Diagnostics and Prognostics Testbed (ADAPT). Project Lead: Scott Poll. Subject: Fault diagnosis in electrical power systems. Description: The Advanced...

  4. ADAM adaptation and mitigation strategies: supporting European climate policy. Deliverable D3 of work package M1 (code D-M1.3). ADAM 2-degree scenario for Europe - policies and impacts

    Schade, Wolfgang; Jochem, Eberhard; Barker, Terry [and others]

    2009-07-31

    ADAM research identifies and appraises existing and new policy options that can contribute to different combinations of adaptation and mitigation strategies. These options address the demands a changing climate will place on protecting citizens and valuable ecosystems - i.e., adaptation - as well as addressing the necessity to restrain/control humankind's perturbation to global climate to a desirable level - i.e., mitigation. The work package Mitigation 1 (Ml) has the core objective to simulate mitigation options and their related costs for Europe until 2050 and 2100 respectively. The focus of this deliverable is on the period 2005 to 2050. The long-term period until 2100 is covered in the previous deliverable D2, applying the POLES model for this time horizon. The analysis constitutes basically a techno-economic analysis. Depending on the sector analyzed it is either directly combined with a policy analysis (e.g. in the transport sector, renewables sector) or the policy analysis is performed qualitatively as a subsequent and independent step after the techno-economic analysis is completed (e.g. in the residential and service sectors). The book includes the following chapters: scenarios and macroeconomic assumptions; methodological issues analyzing mitigation options; the integrated global energy model POLES and its projections for the reference and 2 deg C scenarios; forest and basic materials sector; residential sector in Europe; the service (tertiary) and the primary sectors in Europe; basic products and other manufacturing industry sectors; transport sectors in Europe; renewable sector in Europe; conversion sector in Europe; syntheses and sectoral analysis in Europe; macroeconomic impacts of climate policy in the EU; the effects of the financial crisis on baseline simulations with implications for climate policy modeling: an analysis using the global model E3MG 2008-2012; conclusions and policy recommendations.

  6. The GnRH receptor and the response of gonadotrope cells to GnRH pulse frequency code. A story of an atypical adaptation of cell function relying on a lack of receptor homologous desensitization.

    Christian Bleux

    2010-01-01

    Brain control of the reproductive system is mediated through hypothalamic gonadotropin-releasing hormone (GnRH), which activates specific receptors (GnRHR) present at the surface of the pituitary gonadotropes to trigger secretion of the two gonadotropins LH and FSH. A unique feature of this system is the high dependence on the secretion mode of GnRH, which is basically pulsatile but undergoes considerable fluctuations in pulse frequency pattern in response to endogenous or external factors. How the physiological fluctuations of GnRH secretion that orchestrate normal reproduction are decoded by the gonadotrope cell machinery to ultimately control gonadotropin release and/or subunit gene transcription has been the subject of intensive studies during the past decades. Surprisingly, the mammalian GnRHR is unique among the G protein-coupled receptor family as it lacks the carboxy-terminal tail usually involved in the classical endocytotic process. Accordingly, it does not desensitize properly and internalizes very poorly. Both this atypical intrinsic property and post-receptor events may thus contribute to decoding the GnRH signal. This includes the participation of a network of signaling pathways that differently respond to GnRH together with a growing number of genes differentially sensitive to pulse frequency. Among these are two pairs of genes, the transcription factors EGR-1 and NAB, and the regulatory factors activin and follistatin, that function as intracellular autoregulatory feedback loops controlling respectively LHbeta and FSHbeta gene expression and hence, LH and FSH synthesis. Pituitary gonadotropes thus represent a unique model of cells functionally adapted to respond to a considerably fluctuating neuroendocrine stimulation, from short individual pulses to sustained GnRH as observed at the proestrus of the ovarian cycle.
Altogether, the data emphasize the adaptive reciprocal complementarity of hypothalamic GnRH neurones and pituitary gonadotropes to

  7. The materiality of Code

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN......, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  8. Coding for optical channels

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  9. SEVERO code - user's manual

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code is related to the statistics of extremes: extreme winds, extreme precipitation and flooding hazard risk analysis. (A.C.A.S.)
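    Extreme-value risk analyses of the kind SEVERO addresses commonly fit a Gumbel distribution to a series of annual maxima and derive return levels from it. The sketch below is a generic illustration of that workflow with a method-of-moments fit and made-up data; it is not SEVERO's actual algorithm.

```python
import math

EULER_GAMMA = 0.5772156649

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments Gumbel fit: scale beta = sqrt(6)*std/pi,
    location mu = mean - Euler_gamma * beta."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def return_level(mu, beta, T):
    """Value exceeded on average once every T years:
    x_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Made-up annual maximum wind speeds (m/s):
maxima = [31.2, 28.5, 35.1, 30.0, 33.4, 29.8, 38.2, 27.9, 32.6, 34.0]
mu, beta = gumbel_fit_moments(maxima)
x50 = return_level(mu, beta, 50)  # 50-year return level
```

The 50-year return level extrapolates beyond the largest observed maximum, which is exactly why a fitted extreme-value model, rather than the empirical record alone, is needed for hazard risk analysis.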

  10. Synthesizing Certified Code

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  11. FERRET data analysis code

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
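    The least-squares combination of related measurements and calculations with full covariance treatment can be sketched as a generalized least-squares (Bayesian) update of prior parameters. This is a generic illustration of the method, not FERRET's code; the matrices below are made-up.

```python
import numpy as np

def gls_update(x0, C0, G, y, V):
    """Generalized least-squares update: combine prior parameters x0 with
    covariance C0 and measurements y = G x + e, with e of covariance V.
    Returns the posterior estimate and its (reduced) covariance."""
    K = C0 @ G.T @ np.linalg.inv(G @ C0 @ G.T + V)  # gain matrix
    x1 = x0 + K @ (y - G @ x0)                      # updated estimate
    C1 = C0 - K @ G @ C0                            # updated covariance
    return x1, C1

x0 = np.array([1.0, 2.0])          # prior (e.g. calculated) values
C0 = np.diag([0.5, 0.5])           # prior covariance
G = np.array([[1.0, 1.0]])         # the measurement sums both parameters
y = np.array([3.5])                # measured value
V = np.array([[0.1]])              # measurement variance
x1, C1 = gls_update(x0, C0, G, y, V)
```

The posterior covariance C1 is smaller than C0, which is the quantitative uncertainty reduction the evaluation is after, and off-diagonal terms of C1 carry the correlations induced by the shared measurement.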

  12. Stylize Aesthetic QR Code

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  13. Enhancing QR Code Security

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response codes open the possibility to convey data in a unique way, yet insufficient prevention and protection might lead to QR codes being exploited on behalf of attackers. This thesis starts by presenting a general introduction to the background and stating two problems regarding QR code security, followed by comprehensive research on both the QR code itself and related issues. From the research, a solution taking advantage of cloud and cryptography together with an implementation come af...

  14. Development of code PRETOR for stellarator simulation

    Dies, J.; Fontanet, J.; Fontdecaba, J.M.; Castejon, F.; Alejandre, C.

    1998-01-01

    The Departament de Física i Enginyeria Nuclear (DFEN) of the UPC has some experience in the development of the transport code PRETOR. This code has been validated with shots of DIII-D, JET and TFTR, and it has also been used in the simulation of operational scenarios of ITER fast burn termination. Recently, the association EURATOM-CIEMAT has started the operation of the TJ-II stellarator. Due to the need to validate the results given by other transport codes applied to stellarators, and because all of them make some approximations, such as averaging magnitudes over each magnetic surface, it was thought suitable to adapt the PRETOR code to devices without axial symmetry, like stellarators, which is very suitable for the specific needs of the study of TJ-II. Several modifications are required in PRETOR; the main ones concern the models of magnetic equilibrium, geometry, and transport of energy and particles. In order to solve the complex magnetic equilibrium geometry, the powerful numerical code VMEC has been used. This code gives the magnetic surface shape as a Fourier series in terms of the harmonics (m,n). Most of the geometric magnitudes are also obtained from the VMEC results file. The energy and particle transport models will be replaced by phenomenological models that are better adapted to stellarator simulation. Using the proposed models, the aim is to reproduce experimental data available from present stellarators, paying special attention to the TJ-II of the association EURATOM-CIEMAT. (Author)
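    The Fourier representation of flux surfaces mentioned above can be sketched for the stellarator-symmetric case, where R is a cosine series and Z a sine series in the poloidal/toroidal harmonics (m, n). The coefficient values below are made-up illustrative numbers, not TJ-II or VMEC output.

```python
import numpy as np

def surface_rz(rmnc, zmns, theta, zeta):
    """Evaluate a stellarator-symmetric flux surface from VMEC-style
    Fourier coefficients: R = sum R_mn cos(m*theta - n*zeta) and
    Z = sum Z_mn sin(m*theta - n*zeta). The dicts map (m, n) -> coefficient."""
    R = sum(c * np.cos(m * theta - n * zeta) for (m, n), c in rmnc.items())
    Z = sum(c * np.sin(m * theta - n * zeta) for (m, n), c in zmns.items())
    return R, Z

# Made-up coefficients: major radius 1.5 m, elliptical cross-section,
# plus a small helical (m=1, n=1) deformation breaking axial symmetry.
rmnc = {(0, 0): 1.5, (1, 0): 0.25, (1, 1): 0.05}
zmns = {(1, 0): 0.30, (1, 1): 0.05}

theta = np.linspace(0, 2 * np.pi, 64)
R, Z = surface_rz(rmnc, zmns, theta, zeta=0.0)  # cross-section at zeta = 0
```

Because the (m, n≠0) terms couple theta and zeta, the cross-section changes shape with toroidal angle, which is precisely the non-axisymmetric geometry an axisymmetric transport code cannot represent.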

  15. Opening up codings?

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  16. Gauge color codes

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  17. Refactoring test code

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  18. Ambiguous Adaptation

    Møller Larsen, Marcus; Lyngsie, Jacob

    2017-01-01

    We investigate the connection between contract duration, relational mechanisms, and premature relationship termination. Based on an analysis of a large sample of exchange relationships in the global service-provider industry, we argue that investments in either longer contract duration or more in...... ambiguous reference points for adaptation and thus increase the likelihood of premature termination by restricting the parties' set of adaptive actions....

  19. Climate adaptation

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  20. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    Herranz, Luis E., E-mail: luisen.herranz@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Garcia, Monica, E-mail: monica.gmartin@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Morandi, Sonia, E-mail: sonia.morandi@rse-web.it [Nuclear and Industrial Plant Safety Team, Power Generation System Department, RSE, via Rubattino 54, 20134 Milano (Italy)

    2013-12-15

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have

  2. Software Certification - Coding, Code, and Coders

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  3. Adaptive steganography

    Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.

    2002-04-01

    Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
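
    The LSB embedding that the capacity bound above applies to can be sketched in a few lines (a generic illustration; the function names and toy pixel values are our own, not from the paper):

```python
def embed_lsb(pixels, message_bits):
    """Embed message bits in the least-significant bits of pixel values
    (the plain, non-adaptive LSB scheme the capacity bound applies to)."""
    assert len(message_bits) <= len(pixels)
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        # Clear the LSB, then set it to the message bit.
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the embedded bits from the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [127, 200, 64, 33, 18, 250, 91, 7]   # toy "image"
bits = [1, 0, 1, 1, 0, 1]                    # toy message
stego = embed_lsb(cover, bits)
recovered = extract_lsb(stego, len(bits))
```

    Adaptive variants modify which pixels are selected for embedding (e.g. favouring textured regions or dodging a known steganalysis statistic), not the embed/extract mechanics themselves.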

  4. Adaptive Lighting

    Petersen, Kjell Yngve; Søndergaard, Karin; Kongshaug, Jesper

    2015-01-01

    Adaptive Lighting. Adaptive lighting is based on a partial automation of the possibilities to adjust the colour tone and brightness levels of light in order to adapt to people’s needs and desires. IT support is key to the technical developments that afford adaptive control systems. The possibilities ... the investigations of lighting scenarios carried out in two test installations: White Cube and White Box. The test installations are discussed as large-scale experiential instruments. In these test installations we examine what could potentially occur when light using LED technology is integrated and distributed differently into an architectural body. We also examine what might occur when light is dynamic and able to change colour, intensity and direction, and when it is adaptive and can be brought into interaction with its surroundings. In short, what happens to an architectural space when artificial lighting ceases ...

  5. The network code

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  6. NAGRADATA. Code key. Geology

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically); 2. with the codes listed alphabetically; and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing (mainly data retrieval), economy of storage memory requirements, and the standardisation of terminology. The thesaurus-like nature of this 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system; future editions will be issued in a loose-leaf ring-book system which can be updated by an organised updating service. (author)

  7. XSOR codes users manual

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms
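
    The parametric-with-uncertainty idea can be illustrated with a minimal Monte Carlo sketch (the toy model, parameter names and ranges below are invented for illustration and are not XSOR equations):

```python
import random

def release_fraction(scrubbing_eff, core_inventory_fraction):
    """Toy parametric model: fraction of a radionuclide group released
    to the atmosphere (purely illustrative, not an XSOR correlation)."""
    return core_inventory_fraction * (1.0 - scrubbing_eff)

def propagate_uncertainty(n_samples=10000, seed=42):
    """Sample uncertain inputs and summarize the output distribution."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        scrubbing = rng.uniform(0.7, 0.99)   # assumed uncertainty range
        inventory = rng.uniform(0.2, 0.6)    # assumed uncertainty range
        results.append(release_fraction(scrubbing, inventory))
    results.sort()
    return {
        "median": results[n_samples // 2],
        "p95": results[int(0.95 * n_samples)],
    }

stats = propagate_uncertainty()
```

    A fast parametric surrogate like this can be evaluated thousands of times, which is exactly what makes uncertainty propagation affordable compared to a mechanistic code.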

  8. Reactor lattice codes

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, on the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  9. DLLExternalCode

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
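
    The write-input / run / read-output cycle described above can be sketched in Python (the input-file format and the placeholder command are assumptions for illustration; the real DLL drives a genuine external application according to its instructions file):

```python
import os
import subprocess
import tempfile

def run_external_code(inputs, command=("echo",)):
    """Write inputs to a file, invoke an external program, read results back.
    The 'name = value' file format and the echo command are placeholders."""
    with tempfile.TemporaryDirectory() as workdir:
        in_path = os.path.join(workdir, "model.inp")
        with open(in_path, "w") as f:
            for name, value in inputs.items():
                f.write(f"{name} = {value}\n")
        # Run the external application (here a harmless no-op stand-in).
        subprocess.run([*command, in_path], check=True, capture_output=True)
        # In the real pattern the outputs are parsed from files the external
        # code wrote; here we simply read our own input file back.
        with open(in_path) as f:
            lines = [ln.strip() for ln in f if "=" in ln]
    return dict(ln.split(" = ") for ln in lines)

outputs = run_external_code({"flow_rate": 1.5, "temperature": 300})
```

    The key design point is the loose coupling: the orchestrator never links against the external code, only against its file formats.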

  10. Adaptation improves face trustworthiness discrimination

    Bruce D Keefe

    2013-06-01

    Adaptation to facial characteristics, such as gender and viewpoint, has been shown both to bias our perception of faces and to improve facial discrimination. In this study, we examined whether adapting to two levels of face trustworthiness improved sensitivity around the adapted level. Facial trustworthiness was manipulated by morphing between trustworthy and untrustworthy prototypes, each generated by morphing eight trustworthy and eight untrustworthy faces respectively. In the first experiment, just-noticeable differences (JNDs) were calculated for an untrustworthy face after participants adapted to an untrustworthy face, a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical, except that JNDs were calculated for a trustworthy face. In the third experiment we examined whether adapting to an untrustworthy male face improved discrimination of an untrustworthy female face. In all experiments, participants completed a two-interval forced-choice adaptive staircase procedure, in which they judged which face was more untrustworthy. JNDs were derived from a psychometric function fitted to the data. Adaptation improved sensitivity to faces conveying the same level of trustworthiness when compared to no adaptation. When adapting to and discriminating around a different level of face trustworthiness there was no improvement in sensitivity and JNDs were equivalent to those in the no-adaptation condition. The improvement in sensitivity was found to occur even when adapting to a face with different gender and identity. These results suggest that adaptation to facial trustworthiness can selectively enhance mechanisms underlying the coding of facial trustworthiness to improve perceptual sensitivity. These findings have implications for the role of our visual experience in the decisions we make about the trustworthiness of other individuals.
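
    An adaptive staircase of the general kind used in these experiments can be simulated with a simple one-up two-down rule (every numeric choice below is illustrative, not a parameter from the study):

```python
import random

def simulate_staircase(true_jnd=0.1, start=0.5, step=0.05, n_trials=200, seed=1):
    """One-up two-down staircase run against a simulated observer whose
    probability of a correct judgement grows with the stimulus difference."""
    rng = random.Random(seed)
    level = start
    correct_streak = 0
    reversals = []
    last_direction = 0
    for _ in range(n_trials):
        # Toy psychometric function: chance (0.5) up to certainty.
        p_correct = 0.5 + 0.5 * min(1.0, level / (2 * true_jnd))
        correct = rng.random() < p_correct
        if correct:
            correct_streak += 1
            if correct_streak < 2:
                continue
            correct_streak = 0
            direction = -1                    # two correct -> make it harder
            level = max(0.01, level - step)
        else:
            correct_streak = 0
            direction = +1                    # one wrong -> make it easier
            level += step
        if last_direction and direction != last_direction:
            reversals.append(level)           # record staircase reversals
        last_direction = direction
    # Threshold estimate: mean of the late reversal levels.
    tail = reversals[-6:] or [level]
    return sum(tail) / len(tail)

jnd_estimate = simulate_staircase()
```

    A one-up two-down rule converges on the roughly 71%-correct point of the psychometric function, which is why it is a common choice for JND estimation.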

  11. Practical adaptive quantum tomography

    Granade, Christopher; Ferrie, Christopher; Flammia, Steven T.

    2017-11-01

    We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. We numerically demonstrate these advantages in several scenarios including mixed states, higher-dimensional systems, and restricted measurements. Complete data and source code for this work are available online [1] (http://cgranade.com) and can be previewed at https://goo.gl/koiWxR.

  12. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  13. Adaptation Insights

    Addressing Climate Change Adaptation in Africa through Participatory Action Research. A Regional Observatory ... while the average annual rainfall recorded between. 1968 and 1999 was .... the region of Thies. For sustainability reasons, the.

  14. Adaptation Stories

    By Reg'

    adaptation to climate change from various regions of the Sahel. Their .... This simple system, whose cost and maintenance were financially sustainable, brought ... method that enables him to learn from experience and save time, which he ...

  15. An Optimal Linear Coding for Index Coding Problem

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  16. The Aesthetics of Coding

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  17. Majorana fermion codes

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  18. Theory of epigenetic coding.

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  19. DISP1 code

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)
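
    A screening-level dispersion estimate of the kind such a tool produces can be sketched with the textbook Gaussian plume formula (this is a generic formula, not DISP1's actual model; a real code derives the dispersion parameters from stability class and downwind distance rather than taking them as inputs):

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Textbook Gaussian plume concentration at crosswind offset y and
    height z, for release rate q, wind speed u and effective release
    height h. sigma_y, sigma_z are the plume spread parameters."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline, ground-level concentration with illustrative values
# (q in g/s, u in m/s, heights and spreads in m).
c = gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0, h=50.0,
                   sigma_y=80.0, sigma_z=40.0)
```

    The reflection term doubles the contribution of the "image" source below ground, which is the standard way to enforce zero deposition flux through the ground plane in this simple model.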

  20. Phonological coding during reading.

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  1. The aeroelastic code FLEXLAST

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  2. Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2013-01-01

    In this paper we propose a novel scheme for on the fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaption based on the link state...

  3. Is adaptation. Truly an adaptation?

    Thais Flores Nogueira Diniz

    2008-04-01

    The article begins by historicizing film adaptation from the arrival of cinema, pointing out the many theoretical approaches under which the process has been seen: from the concept of “the same story told in a different medium” to a comprehensible definition such as “the process through which works can be transformed, forming an intersection of textual surfaces, quotations, conflations and inversions of other texts”. To illustrate this new concept, the article discusses Spike Jonze’s film Adaptation. according to James Naremore’s proposal which considers the study of adaptation as part of a general theory of repetition, joined with the study of recycling, remaking, and every form of retelling. The film deals with the attempt by the scriptwriter Charles Kaufman, played by Nicolas Cage, to adapt/translate a non-fictional book to the cinema, but ends up with a kind of film which is by no means what it intended to be: a film of action in the model of Hollywood productions. During the process of creation, Charles and his twin brother, Donald, undergo a series of adventures involving some real persons from the world of film, the author and the protagonist of the book, all of them turning into fictional characters in the film. In the film, adaptation then signifies something different from its traditional meaning.

  4. MORSE Monte Carlo code

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  5. QR codes for dummies

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.

  6. Tokamak Systems Code

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  7. Efficient Coding of Information: Huffman Coding -RE ...

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...

  8. NR-code: Nonlinear reconstruction code

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  9. Research and Design in Unified Coding Architecture for Smart Grids

    Gang Han

    2013-09-01

    A standardized and shared information platform is the foundation of the Smart Grids. In order to improve the information integration of power grid dispatching centres and achieve efficient data exchange, sharing and interoperability, a unified coding architecture is proposed. The architecture includes a coding management layer, a coding generation layer, an information models layer and an application system layer. The hierarchical design allows the whole coding architecture to adapt to different application environments, different interfaces and loosely coupled requirements, realizing the integrated model management function of the power grids. A life cycle and survival evaluation method for the unified coding architecture is also proposed, which can ensure the stability and availability of the coding architecture. Finally, future directions for the coding technology of the Smart Grids are discussed.

  10. Synthesizing Certified Code

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  11. Code of Ethics

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  12. Interleaved Product LDPC Codes

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

    Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error-floor region.
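
    The role of an interleaver can be illustrated with the simplest row-column block interleaver (a generic sketch; the interleavers proposed in the paper are designed specifically for product LDPC codes):

```python
def interleave(bits, rows, cols):
    """Write row-by-row, read column-by-column: a basic block interleaver
    that spreads adjacent symbols rows-apart in the output."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse permutation: write column-by-column, read row-by-row."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

data = list(range(12))
scrambled = interleave(data, rows=3, cols=4)
restored = deinterleave(scrambled, rows=3, cols=4)
```

    Spreading formerly adjacent symbols apart is what breaks up error bursts and reshapes the low-weight codeword structure the abstract refers to.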

  13. Insurance billing and coding.

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  14. Error Correcting Codes

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  15. Scrum Code Camps

    Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  16. RFQ simulation code

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  17. Error Correcting Codes

    Resonance – Journal of Science Education, Volume 2, Issue 3, March: Error Correcting Codes – Reed Solomon Codes. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  18. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  19. Validation of thermalhydraulic codes

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. "agreement is within 10%". A quantitative method is preferable, especially when several competing codes are available: the codes can then be ranked in order of merit. Such a method is described. (Author)
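The ranking idea in the abstract can be made concrete. Below is a minimal sketch, not taken from the paper, that scores each competing code by the root-mean-square relative deviation of its predictions from the measured data; the function and data names are illustrative assumptions.

```python
import math

def rank_codes(experiment, predictions):
    """Rank competing codes by normalized RMS deviation from experiment.

    experiment: list of measured values
    predictions: dict mapping code name -> list of predicted values
    Returns a list of (code, score) pairs, best (lowest score) first.
    """
    scores = {}
    for code, pred in predictions.items():
        # relative deviation at each measurement point
        devs = [(p - e) / e for p, e in zip(pred, experiment)]
        scores[code] = math.sqrt(sum(d * d for d in devs) / len(devs))
    return sorted(scores.items(), key=lambda kv: kv[1])
```

A score of about 0.1 corresponds roughly to the informal statement "agreement is within 10%", while the sorted output gives the order of merit the abstract asks for.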

  20. Fracture flow code

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

    The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex and cannot be fully certified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  1. Monte Carlo simulation code modernization

    CERN. Geneva

    2015-01-01

    The continual development of sophisticated transport simulation algorithms allows increasingly accurate description of the effect of the passage of particles through matter. This modelling capability finds applications in a large spectrum of fields from medicine to astrophysics, and of course HEP. These new capabilities however come at the cost of a greater computational intensity of the new models, which has the effect of increasing the demands on computing resources. This is particularly true for HEP, where the demand for more simulations is driven by the need for both more accuracy and more precision, i.e. better models and more events. Usually HEP has relied on the "Moore's law" evolution, but for almost ten years the increase in clock speed has withered and computing capacity comes in the form of hardware architectures of many-core or accelerated processors. To harness these opportunities we need to adapt our code to concurrent programming models taking advantage of both SIMD and SIMT architectures. Th...

  2. Strategic Adaptation

    Andersen, Torben Juul

    2015-01-01

    This article provides an overview of theoretical contributions that have influenced the discourse around strategic adaptation including contingency perspectives, strategic fit reasoning, decision structure, information processing, corporate entrepreneurship, and strategy process. The related concepts of strategic renewal, dynamic managerial capabilities, dynamic capabilities, and strategic response capabilities are discussed and contextualized against strategic responsiveness. The insights derived from this article are used to outline the contours of a dynamic process of strategic adaptation. This model incorporates elements of central strategizing, autonomous entrepreneurial behavior, interactive information processing, and open communication systems that enhance the organization's ability to observe exogenous changes and respond effectively to them.

  3. Adaptive Lighting

    Petersen, Kjell Yngve; Kongshaug, Jesper; Søndergaard, Karin

    2015-01-01

    offered by adaptive lighting control are created by the ways that the system components, the network and data flow can be coordinated through software so that the dynamic variations are controlled in ways that meaningfully adapt according to people’s situations and design intentions. This book discusses...... to be static, and no longer acts as a kind of spatial constancy maintaining stability and order? Moreover, what new potentials open in lighting design? This book is one of four books that is published in connection with the research project entitled LED Lighting; Interdisciplinary LED Lighting Research...

  4. Adaptive test

    Kjeldsen, Lars Peter; Eriksen, Mette Rose

    2010-01-01

    The article is an evaluation of the adaptive tests that were introduced in the Danish primary school (folkeskolen). It focuses in particular on assessment in the folkeskole, and contributes guidance on evaluation, evaluation tools, and subject-specific assessment materials.

  5. Huffman coding in advanced audio coding standard

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, their optimisations and a working implementation. Much attention has been paid to minimising the demand for hardware resources, especially memory size. The design goal was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
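As background for what a Huffman noiseless encoder does, here is a minimal software sketch of building a Huffman code table from symbol frequencies. It is a generic illustration only; it does not reproduce the AAC codebooks or the paper's hardware architecture.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table (symbol -> bit string) for `data`.

    Rarer symbols get longer codewords, minimising total stream length.
    """
    freq = Counter(data)
    # heap entries: (weight, tiebreak, tree); a tree is a symbol
    # or a (left, right) pair of subtrees
    heap = [(w, i, sym) for i, (sym, w) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol input
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        # repeatedly merge the two lightest subtrees
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

For the input "aaaabbc" the frequent symbol "a" receives a 1-bit codeword while "b" and "c" receive 2-bit codewords, giving a 10-bit encoded stream.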

  6. Is adaptation. Truly an adaptation?

    Thais Flores Nogueira Diniz

    2006-04-01

    The article begins by historicizing film adaptation from the arrival of cinema, pointing out the many theoretical approaches under which the process has been seen: from the concept of "the same story told in a different medium" to a comprehensible definition such as "the process through which works can be transformed, forming an intersection of textual surfaces, quotations, conflations and inversions of other texts". To illustrate this new concept, the article discusses Spike Jonze's film Adaptation. according to James Naremore's proposal, which considers the study of adaptation as part of a general theory of repetition, joined with the study of recycling, remaking, and every form of retelling. The film deals with the attempt by the scriptwriter Charles Kaufman, played by Nicolas Cage, to adapt/translate a non-fictional book to the cinema, but ends up with a kind of film which is by no means what it intended to be: a film of action in the model of Hollywood productions. During the process of creation, Charles and his twin brother, Donald, undergo a series of adventures involving some real persons from the world of film, the author and the protagonist of the book, all of them turning into fictional characters in the film. In the film, adaptation then signifies something different from its traditional meaning.

  7. Filtering, Coding, and Compression with Malvar Wavelets

    1993-12-01

    speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus...and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for...recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates

  8. General purpose code for Monte Carlo simulations

    Wilcke, W.W.

    1983-01-01

    A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility, the code is organized like a general-purpose computer, operating on a vector describing the time-dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations.

  9. Adaptability of supercomputers to nuclear computations

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The high computing speed of supercomputers is based on vector computation. Over the past six years the authors investigated the adaptability to vector computation of about 40 typical atomic energy codes. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation capability of supercomputers, problems regarding their utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithms and program structures used in the codes. The speedup obtained with pipelined vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
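The adaptability question above turns on how much of a code can be expressed as vector operations. The toy sketch below, in Python with NumPy and purely illustrative, contrasts the scalar-loop and vector forms of the same update; on pipelined vector hardware (and in NumPy) the second form is the one that runs fast.

```python
import numpy as np

def axpy_loop(a, x, y):
    """Scalar-loop form of y <- a*x + y (poorly suited to vector units)."""
    out = np.empty_like(y)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def axpy_vector(a, x, y):
    """Vector form: one array expression the hardware can pipeline."""
    return a * x + y
```

Whether a real code admits this rewrite depends, as the abstract notes, on its algorithm and program structure, for example on whether the loop body carries dependences between iterations.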

  10. Adaptation is...

    IDRC

    vital sector is under threat. While it is far from the only development challenge facing local farmers, extreme variations in the climate of West Africa in the past several decades have dealt the region a bad hand. Drought and flood now follow each other in succession. Adaptation is... “The floods spoiled our harvests and we.

  11. Ambiguous Adaptation

    Møller Larsen, Marcus; Lyngsie, Jacob

    and reciprocal adaptation of informal governance structure create ambiguity in situations of contingencies, which, subsequently, increases the likelihood of premature relationship termination. Using a large sample of exchange relationships in the global service provider industry, we find support for a hypothesis...

  12. Report number codes

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
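The two-part structure of an STRN described above can be illustrated with a small parser. This is a heuristic sketch assuming the report code and sequential number are separated by a hyphen, with the sequential number as a trailing digit-led run; it is not a complete Z39.23 implementation, and the sample report numbers are illustrative.

```python
import re

def split_strn(strn):
    """Split a Standard Technical Report Number into
    (report code, sequential number).

    Heuristic: the sequential number is the trailing hyphen-separated
    part that starts with a digit; everything before it is the report
    code identifying the issuing organization or program.
    """
    m = re.fullmatch(r"(.+?)-(\d[\d-]*)", strn)
    if m is None:
        raise ValueError(f"not a recognisable STRN: {strn!r}")
    return m.group(1), m.group(2)
```

For example, a number of the form "ANL-7023" would split into the report code "ANL" and the sequential number "7023".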

  14. Models and applications of the UEDGE code

    Rensink, M.E.; Knoll, D.A.; Porter, G.D.; Rognlien, T.D.; Smith, G.R.; Wising, F.

    1996-09-01

    The transport of particles and energy from the core of a tokamak to nearby material surfaces is an important problem for understanding present experiments and for designing reactor-grade devices. A number of fluid transport codes have been developed to model the plasma in the edge and scrape-off layer (SOL) regions. This report will focus on recent model improvements and illustrative results from the UEDGE code. Some geometric and mesh considerations are introduced, followed by a general description of the plasma and neutral fluid models. A few comments on computational issues are given and then two important applications are illustrated concerning benchmarking and the ITER radiative divertor. Finally, we report on some recent work to improve the models in UEDGE by coupling to a Monte Carlo neutrals code and by utilizing an adaptive grid

  15. Sparsity in Linear Predictive Coding of Speech

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...

  16. NALAP: an LMFBR system transient code

    Martin, B.A.; Agrawal, A.K.; Albright, D.C.; Epel, L.G.; Maise, G.

    1975-07-01

    NALAP is a LMFBR system transient code. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic response of sodium cooled fast breeder reactors when subjected to postulated accidents such as a massive pipe break as well as a variety of other upset conditions that do not disrupt the system geometry. Various components of the plant are represented by control volumes. These control volumes are connected by junctions some of which may be leak or fill junctions. The fluid flow equations are modeled as compressible, single-stream flow with momentum flux in one dimension. The transient response is computed by integrating the thermal-hydraulic conservation equations from user-initialized operating conditions by an implicit numerical scheme. Point kinetics approximation is used to represent the time dependent heat generation in the reactor core

  17. Cryptography cracking codes

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption: essentially, the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real-world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  18. Coded Splitting Tree Protocols

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

  19. Transport theory and codes

    Clancy, B.E.

    1986-01-01

    This chapter begins with a neutron transport equation which includes the one dimensional plane geometry problems, the one dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers problems which can be solved; eigenvalue problems; outer iteration loop; inner iteration loop; and finite difference solution procedures. The input and output data for ANISN is also discussed. Two dimensional problems such as the DOT code are given. Finally, an overview of the Monte-Carlo methods and codes are elaborated on

  20. Gravity inversion code

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
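"Stabilized linear inverse theory" generally refers to regularized least squares. The sketch below, in NumPy and illustrative only (the actual TREND/INVERT iteration and Bouguer data handling are not reproduced), shows a Tikhonov-stabilized inversion of a linear model G m = d.

```python
import numpy as np

def stabilized_inverse(G, d, alpha=0.1):
    """Solve min ||G m - d||^2 + alpha^2 ||m||^2 via the normal equations.

    The damping term alpha stabilizes the inversion against noise in d
    and against poorly constrained directions in the model space.
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)
```

With alpha = 0 this reduces to ordinary least squares; increasing alpha shrinks the recovered model toward zero, trading data fit for stability.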

  1. Implementation of computer codes for performance assessment of the Republic repository of LLW/ILW Mochovce

    Hanusik, V.; Kopcani, I.; Gedeon, M.

    2000-01-01

    This paper describes selection and adaptation of computer codes required to assess the effects of radionuclide release from Mochovce Radioactive Waste Disposal Facility. The paper also demonstrates how these codes can be integrated into performance assessment methodology. The considered codes include DUST-MS for source term release, MODFLOW for ground-water flow and BS for transport through biosphere and dose assessment. (author)

  2. Application of the integral code MELCOR for German NPPs and use within accident management and PSA projects

    Sonnenkalb, Martin

    2006-01-01

    The paper summarizes the application of MELCOR to German NPPs with PWR and BWR. Different code systems such as ATHLET/ATHLET-CD, COCOSYS and ASTEC are also developed at GRS, but they are not discussed in this paper. GRS has been using MELCOR since 1990 for real plant calculations. The results of MELCOR analyses are used mainly in PSA level 2 studies and in Accident Management projects for both types of NPPs. MELCOR has been a very useful and robust tool for these analyses. The calculations performed within the PSA level 2 studies for both types of German NPPs have shown that typical severe accident scenarios are characterized by several phases and that the consideration of plant specifics is important, not only for realistic source term calculations. An overview of typical severe accident phases, together with the main accident management measures installed in German NPPs, is presented in the paper. Several severe accident sequences have been calculated for both reactor types, and some detailed nodalisation studies and code-to-code comparisons have been prepared in the past to prove the developed core, reactor circuit and containment/building nodalisation schemes. Together with the compilation of the MELCOR data set, the qualification of the nodalisation schemes has been pursued with comparative calculations with detailed GRS codes for selected phases of severe accidents. The results of these comparative analyses showed in most of the areas a good agreement of essential parameters and of the general description of the plant behaviour during the accident progression. The generally high level of detail of the German plant nodalisation schemes developed for MELCOR contributes significantly to this good agreement between integral and detailed code results. The implementation of MELCOR into the GRS simulator ATLAS was very important for the assessment of the results, not only due to the great detail of the nodalisation schemes used. It is used for training of severe accident

  3. Fulcrum Network Codes

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.

  4. Supervised Convolutional Sparse Coding

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  5. SASSYS LMFBR systems code

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  6. OCA Code Enforcement

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  7. The fast code

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  8. Code Disentanglement: Initial Plan

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
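The levelization condition described in the plan, packages forming a directed acyclic graph with each package using only lower levels, can be checked mechanically. The sketch below is illustrative, not LANL's actual tooling: it assigns levels from a map of package dependencies and fails on cycles.

```python
def levelize(deps):
    """Assign a level to each package so that every package uses only
    packages at strictly lower levels; raise ValueError on a cycle.

    `deps` maps package name -> set of package names it uses.
    """
    levels = {}
    pending = dict(deps)
    while pending:
        # packages whose dependencies are all already levelized
        ready = [p for p, uses in pending.items()
                 if all(u in levels for u in uses)]
        if not ready:
            # no progress possible: the 'uses' graph contains a cycle
            raise ValueError("dependency cycle detected")
        for p in ready:
            uses = pending.pop(p)
            levels[p] = 1 + max((levels[u] for u in uses), default=-1)
    return levels
```

Packages with no dependencies land at level 0; any cycle in the uses relationship makes the loop stall and raises immediately, which is exactly the property the levelization design rules out.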

  9. Induction technology optimization code

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  10. VT ZIP Code Areas

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  11. Bandwidth efficient coding

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  12. Reactor lattice codes

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out using the example of the WIMSD-5B code. The WIMS code in its various versions is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  13. Critical Care Coding for Neurologists.

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  14. Lattice Index Coding

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...

  15. Towards advanced code simulators

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  16. Cracking the Gender Codes

    Rennison, Betina Wolfgang

    2016-01-01

    extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion...... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve....

  17. Hedonic "adaptation"

    Paul Rozin

    2008-02-01

    People live in a world in which they are surrounded by potential disgust elicitors such as "used" chairs, air, silverware, and money as well as excretory activities. People function in this world by ignoring most of these, by active avoidance, reframing, or adaptation. The issue is particularly striking for professions, such as morticians, surgeons, or sanitation workers, in which there is frequent contact with major disgust elicitors. In this study, we study the "adaptation" process to dead bodies as disgust elicitors, by measuring specific types of disgust sensitivity in medical students before and after they have spent a few months dissecting a cadaver. Using the Disgust Scale, we find a significant reduction in disgust responses to death and body envelope violation elicitors, but no significant change in any other specific type of disgust. There is a clear reduction in discomfort at touching a cold dead body, but not in touching a human body which is still warm after death.

  18. Adaptation Laboratory

    Huq, Saleemul

    2011-11-15

    Efforts to help the world's poor will face crises in coming decades as climate change radically alters conditions. Action Research for Community Adaptation in Bangladesh (ARCAB) is an action-research programme on responding to climate change impacts through community-based adaptation. Set in Bangladesh at 20 sites that are vulnerable to floods, droughts, cyclones and sea level rise, ARCAB will follow impacts and adaptation as they evolve over half a century or more. National and international 'research partners', collaborating with ten NGO 'action partners' with global reach, seek knowledge and solutions applicable worldwide. After a year setting up ARCAB, we share lessons on the programme's design and move into our first research cycle.

  19. PEAR code review

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  20. KENO-V code

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P_1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general P_N scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes.
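The generation-based k_eff estimate such a criticality code produces can be illustrated with a toy one-group, infinite-medium analog Monte Carlo. The cross sections, ν, and the simplified sampling below are illustrative assumptions, not KENO-V's actual data or particle-tracking logic:

```python
import random

def simulate_keff(n_per_gen=20000, generations=30, discard=5,
                  sigma_a=0.12, sigma_f=0.05, nu=2.43, seed=1):
    """Toy one-group, infinite-medium analog Monte Carlo k_eff estimate.

    Every source neutron is eventually absorbed; with probability
    sigma_f/sigma_a the absorption is a fission producing nu neutrons
    on average.  Analytically, k_inf = nu * sigma_f / sigma_a.
    """
    rng = random.Random(seed)
    p_fission = sigma_f / sigma_a
    frac = nu - int(nu)
    estimates = []
    for gen in range(generations):
        produced = 0
        for _ in range(n_per_gen):
            if rng.random() < p_fission:
                # integer number of fission neutrons with mean nu
                produced += int(nu) + (1 if rng.random() < frac else 0)
        if gen >= discard:             # skip "inactive" generations
            estimates.append(produced / n_per_gen)
    return sum(estimates) / len(estimates)
```

With these numbers k_inf = 2.43 × 0.05 / 0.12 ≈ 1.0125, and the sampled estimate converges to it as the generation size grows.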

  1. Code, standard and specifications

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other techniques, requires standards. These standards are widely used and their methods of application are well established; radiographic testing is therefore practical only when it follows the regulations mentioned and documented in codes, standards and specifications. In Malaysia, a level-one or basic radiographer may carry out radiographic work only on instructions given by a level-two or level-three radiographer. These instructions are produced from the guidelines given in the relevant documents, and the level-two radiographer must follow the specifications in the standard when writing them. It is therefore clear that radiography is a type of work in which everything must follow the rules. For codes, radiography follows the code of the American Society of Mechanical Engineers (ASME); the only code currently in force in Malaysia is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in existence, all radiographic work must automatically follow the regulated rules and standards.

  2. Adaptable positioner

    Labrador Pavon, I.

    1993-01-01

    This paper describes the circuits and programs, in assembly language, developed to control the two DC motors that give mobility to a mechanical arm with two degrees of freedom. As a whole, the system is based on an adaptable regulator designed around an 8-bit microprocessor that, starting from a regulation mode based on the successive-approximation method, evolves to another mode in which a single approximation is sufficient to reach the correct position of each motor. (Author) 22 figs., 6 refs.
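The successive-approximation mode mentioned above works like a SAR register: each bit of the drive word is tried from the most significant downward and kept only if the resulting position does not overshoot the target. A minimal sketch, assuming a monotone plant (the linear `pos` model is hypothetical, not the paper's hardware):

```python
def successive_approximation(read_position, target, bits=8):
    """Build the drive word bit by bit, most significant first,
    keeping a bit only if the measured position stays at or
    below the target (assumes position rises with the word)."""
    word = 0
    for bit in reversed(range(bits)):
        trial = word | (1 << bit)
        if read_position(trial) <= target:
            word = trial            # keep this bit
    return word

# hypothetical linear plant: position proportional to the 8-bit word
pos = lambda w: w / 255.0
best = successive_approximation(pos, target=0.7)
```

After `bits` trials the word is the largest value whose position does not exceed the target; the "adaptive" refinement in the paper then reduces this to a single correction step once the regulator has learned the plant.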

  4. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance; however, it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...
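Fast-CU schemes of this kind typically prune the quadtree recursion early when the unsplit rate-distortion cost is already low. The sketch below shows the general idea only; the linear QP-dependent threshold and the `alpha` constant are illustrative assumptions, not the paper's actual model:

```python
def decide_cu(depth, rd_cost, qp, max_depth=3, alpha=1000.0):
    """Toy early-termination rule for CU splitting: stop recursing when
    the unsplit RD cost at this depth is below a QP-dependent threshold.
    Returns the list of leaf depths chosen for the block."""
    threshold = alpha * (52 - qp)      # hypothetical threshold model
    if depth >= max_depth or rd_cost(depth) < threshold:
        return [depth]                 # encode here, skip the split
    # otherwise split into four sub-CUs and recurse
    return [d for _ in range(4)
            for d in decide_cu(depth + 1, rd_cost, qp, max_depth, alpha)]
```

For a block whose cost drops quickly with depth, the recursion stops after one split instead of exploring the full tree, which is where the encoding-time saving comes from.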

  5. Adaptive ethnography

    Berth, Mette

    2005-01-01

    This paper focuses on the use of an adaptive ethnography when studying such phenomena as young people's use of mobile media in a learning perspective. Mobile media such as PDAs and mobile phones have a number of affordances which make them potential tools for learning. However, before we begin to design and develop educational materials for mobile media platforms we must first understand everyday use and behaviour with a medium such as a mobile phone. The paper outlines the research design for a PhD project on mobile learning which focuses on mobile phones as a way to bridge the gap between formal and informal learning contexts. The paper also proposes several adaptive methodological techniques for studying young people's interaction with mobiles.

  6. Some aspects of adaptive transform coding of multispectral data

    Ahmed, N.; Natarajan, T.

    1977-01-01

    This paper concerns a data compression study pertaining to multispectral scanner (MSS) data. The motivation for this undertaking is the need for securing data compression of images obtained in connection with the Landsat Follow-On Mission, where a compression of at least 6:1 is required. The MSS data used in this study consisted of four scenes: (1) Tristate, consisting of 256 pels per row and a total of 512 rows, i.e., 256x512; (2) Sacramento (256x512); (3) Portland (256x512); and (4) Bald Knob (200x256). All these scenes were on digital tape at 6 bits/pel. The corresponding reconstructed scenes at 1 bit/pel (i.e., a 6:1 compression) are included.
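The core transform-coding idea (transform a block, keep only the strongest coefficients, reconstruct) can be sketched with an orthonormal 1-D DCT-II. The 8-point block and keep-count below are illustrative; a real MSS coder would additionally quantize and entropy-code the kept coefficients to reach the 6:1 target:

```python
import math

def dct(x):
    """Orthonormal 8-point (or N-point) DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            for k in range(N)]

def idct(X):
    """Inverse of the orthonormal DCT-II above."""
    N = len(X)
    return [sum(X[k] * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
                * math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

def compress(x, keep):
    """Zero out all but the `keep` largest-magnitude DCT coefficients."""
    X = dct(x)
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return [Xk if k in kept else 0.0 for k, Xk in enumerate(X)]
```

For smooth image rows most of the energy sits in a few low-frequency coefficients, so discarding the rest loses little, which is what makes adaptive transform coding attractive for MSS scenes.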

  7. Coevolution mechanisms that adapt viruses to genetic code ...

    Recent work on virus × host inter- ... of long-term interdependent symbiotic relationship between them. ... Evolution in species of living organisms occurs based on the .... their parents (Francino and Ochman 1999; Lynn et al. 2002; ... dently some dozens of times. ... in the families of certain viruses, bacteria, fungi and inverte-.

  8. Adaptive modeling of sky for video processing and coding applications

    Zafarifar, B.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    Video content analysis for still- and moving images can be used for various applications, such as high-level semantic-driven operations or pixel-level contentdependent image manipulation. Within video content analysis, sky regions of an image form visually important objects, for which interesting

  9. Code of conduct for scientists (abstract)

    Khurshid, S.J.

    2011-01-01

    The emergence of advanced technologies in the last three decades and extraordinary progress in our knowledge of the basic physical, chemical and biological properties of living matter have offered tremendous benefits to human beings, but have simultaneously highlighted the need for greater awareness and responsibility on the part of the scientists of the 21st century. A scientist is not born with ethics, nor is science ethically neutral; there are ethical dimensions to scientific work. There is a need to evolve an appropriate Code of Conduct for scientists working in every field of science. However, when considering the contents, promulgation and adoption of Codes of Conduct for Scientists, a balance needs to be maintained between the freedom of scientists and, at the same time, some binding on them in the form of Codes of Conduct. The use of good and safe laboratory procedures, whether codified by law or by common practice, must also be considered part of the moral duties of scientists. It is internationally agreed that a general Code of Conduct cannot be formulated for all scientists universally, but there should be a set of 'building blocks' aimed at establishing the Code of Conduct for Scientists, whether as individual researchers or as those responsible for the direction, evaluation and monitoring of scientific activities at the institutional or organizational level. (author)

  10. Modeling report of DYMOND code (DUPIC version)

    Park, Joo Hwan; Yacout, Abdellatif M.

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was first developed by ANL (Argonne National Laboratory) to perform fuel cycle analysis of LWR once-through and mixed LWR-FBR fleets. Since extensive application of the code has been requested, the first version of DYMOND has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts - the source-language platform, input supply and output - but these platforms are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A deals with the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers the other fuel cycle models, considering the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which gives all cost information such as uranium mining cost, reactor operating cost, fuel cost, etc.
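ITHINK-style system-dynamics models of the kind DYMOND uses reduce to stocks integrated from inflows and outflows. The sketch below is a minimal stock-and-flow example only; the parameter names and values are hypothetical and are not DYMOND's actual equations:

```python
def simulate_fleet(years=100, dt=1.0, capacity_gw=10.0,
                   sf_per_gwyr=20.0, reprocess_rate=0.05):
    """Minimal stock-and-flow sketch: spent fuel (the stock) accumulates
    from a fixed reactor fleet and is drawn down by reprocessing a
    fixed fraction of the stock each year (Euler integration)."""
    spent_fuel = 0.0        # tonnes in storage (the stock)
    history = []
    t = 0.0
    while t < years:
        inflow = capacity_gw * sf_per_gwyr       # t/yr discharged
        outflow = reprocess_rate * spent_fuel    # t/yr reprocessed
        spent_fuel += (inflow - outflow) * dt    # Euler step
        history.append(spent_fuel)
        t += dt
    return history
```

The stock rises toward the equilibrium inflow/reprocess_rate (here 200/0.05 = 4000 t); a full fuel-cycle model couples many such stocks (mined uranium, separated Pu, disposal inventory, costs) in the same way.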

  12. 3D equilibrium codes for mirror machines

    Kaiser, T.B.

    1983-01-01

    The codes developed for computing three-dimensional guiding center equilibria for quadrupole tandem mirrors are discussed. TEBASCO (Tandem equilibrium and ballooning stability code) is a code developed at LLNL that uses a further expansion of the paraxial equilibrium equation in powers of β (plasma pressure/magnetic pressure). It has been used to guide the design of the TMX-U and MFTF-B experiments at Livermore. Its principal weakness is its perturbative nature, which renders its validity for high-β calculation open to question. In order to compute high-β equilibria, the reduced MHD technique that has been proven useful for determining toroidal equilibria was adapted to the tandem mirror geometry. In this approach, the paraxial expansion of the MHD equations yields a set of coupled nonlinear equations of motion valid for arbitrary β, that are solved as an initial-value problem. Two particular formulations have been implemented in computer codes developed at NYU/Kyoto U and LLNL. They differ primarily in the type of grid, the location of the lateral boundary and the damping techniques employed, and in the method of calculating pressure-balance equilibrium. Discussions on these codes are presented in this paper. (Kato, T.)

  13. UEP Concepts in Modulation and Coding

    Werner Henkel

    2010-01-01

    First unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but now, with the introduction of scalable video, UEP is developing into a key concept for the transport of multimedia data. The paper presents an overview of some new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptation, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.
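Puncturing, the counterpart of the pruning discussed above, can be illustrated on a standard textbook rate-1/2 convolutional code: deleting coded bits according to a repeating pattern raises the rate without changing the encoder. The (7,5) generators and the pattern below are common textbook choices, not the paper's optimized patterns:

```python
def conv_encode(bits, g1=0o7, g2=0o5):
    """Rate-1/2 feedforward convolutional encoder with the standard
    constraint-length-3 generators (7,5) in octal: two parity bits
    per input bit, each a parity check over the 3-bit state."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g1).count("1") & 1)
        out.append(bin(state & g2).count("1") & 1)
    return out

def puncture(coded, pattern=(1, 1, 1, 0)):
    """Delete coded bits where the repeating pattern has a 0;
    pattern (1,1,1,0) turns rate 1/2 into rate 2/3."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]
```

Six information bits produce twelve coded bits, of which puncturing transmits nine (rate 6/9 = 2/3); the decoder treats the deleted positions as erasures. Pruning works on the information side instead, shortening the code rather than the parity stream.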

  14. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    F.N. HASOON

    2006-12-01

    A new code structure for spectral amplitude coding optical code division multiple access systems based on the double-weight (DW) code family is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be provided by using the EDW code compared with existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that EDW performs much better than the Hadamard and MFH codes.
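The "ideal cross-correlation" property means any two distinct codewords share at most one chip. This can be checked mechanically; the weight-2, DW-style codewords below are illustrative examples only, not the actual DW/EDW construction from the paper:

```python
def max_cross_correlation(codes):
    """Maximum in-phase cross-correlation (number of shared 1-chips)
    over all distinct pairs of binary codewords."""
    return max(sum(a & b for a, b in zip(c1, c2))
               for i, c1 in enumerate(codes) for c2 in codes[i + 1:])

# hypothetical weight-2 codewords with overlapping adjacent chips,
# in the spirit of a DW family: each pair shares at most one chip
dw = [(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)]
assert max_cross_correlation(dw) == 1
```

A cross-correlation bounded by one is what lets a spectral-amplitude-coding receiver cancel multiple-access interference by balanced detection.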

  15. Nuclear code abstracts (1975 edition)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  16. Some new ternary linear codes

    Rumen Daskalov

    2017-07-01

    Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
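For small dimensions, the minimum distance of a linear code over $GF(q)$ can be verified by brute force: enumerate all nonzero codewords generated by the generator matrix and take the minimum Hamming weight. A sketch, checked against the classical ternary tetracode, a known $[4,2,3]_3$ code:

```python
from itertools import product

def min_distance(G, q=3):
    """Minimum Hamming distance of the linear code generated by the
    k x n matrix G over GF(q), by enumerating all q^k - 1 nonzero
    codewords (practical only for small k)."""
    k = len(G)
    best = len(G[0])
    for msg in product(range(q), repeat=k):
        if any(msg):
            cw = [sum(m * g for m, g in zip(msg, col)) % q
                  for col in zip(*G)]
            best = min(best, sum(1 for c in cw if c))
    return best

# the ternary tetracode, a classical [4,2,3]_3 code
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
assert min_distance(G) == 3
```

Serious searches for record-breaking codes replace this enumeration with structured constructions and weight-distribution arguments, but the brute-force check remains useful for confirming a claimed $d$.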

  17. ACE - Manufacturer Identification Code (MID)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identification codes for manufacturers. A manufacturer is identified on an...

  18. Algebraic and stochastic coding theory

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
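The Hamming codes the book introduces can be made concrete with the classic Hamming(7,4): three parity bits at positions 1, 2 and 4 let the decoder locate, and therefore flip, any single-bit error from the binary syndrome. A minimal sketch:

```python
def hamming74_encode(d):
    """Hamming(7,4): place data at positions 3,5,6,7 and parity
    bits p1,p2,p3 at positions 1,2,4 of the 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(r):
    """Syndrome decoding: the syndrome is the 1-based position of a
    single-bit error (0 means no error). Returns the 4 data bits."""
    s = ((r[0] ^ r[2] ^ r[4] ^ r[6])
         | ((r[1] ^ r[2] ^ r[5] ^ r[6]) << 1)
         | ((r[3] ^ r[4] ^ r[5] ^ r[6]) << 2))
    if s:
        r = r[:]
        r[s - 1] ^= 1          # flip the erroneous bit
    return [r[2], r[4], r[5], r[6]]
```

Flipping any one of the seven transmitted bits still decodes to the original data, which is the d = 3 single-error-correcting property the text describes.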

  19. Optical coding theory with Prime

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory, until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct
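A prime-code family of the kind the book centers on can be sketched directly from the usual construction over GF(p): codeword a places one pulse per block of p chips, at chip (a·i mod p) of block i, giving length p² and weight p. A minimal sketch (the in-phase correlation check below follows from i·(a−b) ≡ 0 mod p having only the solution i = 0 for a ≠ b):

```python
def prime_code(p):
    """Prime-code family over GF(p), p prime: one binary codeword per
    a in {0,...,p-1}, of length p*p and weight p, with the single
    pulse of block i at chip (a*i mod p)."""
    family = []
    for a in range(p):
        word = [0] * (p * p)
        for i in range(p):
            word[i * p + (a * i) % p] = 1
        family.append(word)
    return family

codes = prime_code(5)
assert all(sum(w) == 5 for w in codes)
# in-phase cross-correlation of distinct codewords is exactly 1
assert all(sum(x & y for x, y in zip(codes[a], codes[b])) == 1
           for a in range(5) for b in range(a + 1, 5))
```

This low pairwise correlation is what makes the family usable as signature sequences in optical CDMA; shifted (asynchronous) correlations need a slightly larger bound, which is where the book's detailed analysis comes in.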

  20. RBMK-LOCA-Analyses with the ATHLET-Code

    Petry, A. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH Kurfuerstendamm, Berlin (Germany); Domoradov, A.; Finjakin, A. [Research and Development Institute of Power Engineering, Moscow (Russian Federation)

    1995-09-01

    The scientific-technical cooperation between Germany and Russia includes the adaptation of several German codes to the Russian-designed RBMK reactor. One element of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) to RBMK-specific safety problems. This paper contains a short description of an RBMK-1000 reactor circuit. Furthermore, the main features of the thermal-hydraulic code ATHLET are presented, and the main assumptions of the ATHLET-RBMK model are discussed. As an example of its application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. The paper gives an overview of some of the problems posed, and the experience gained, in applying Western best-estimate codes to RBMK calculations.